Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-17 Thread Dale
Dale wrote:
>
> I have to say, mobos and CPUs have come a long way since my last build
> about 10 or 11 years ago.  When the ASUS first booted and I went into
> the BIOS thing, is it still called BIOS, it was very different.  I think
> my current rig allows you to use the mouse.  It's slow tho.  This ASUS
> is vastly improved.  It gives you a LOT more control and I'm not even
> interested in trying to overclock or anything.  Just the fan controls
> are a huge improvement.  I suspect most newer mobos are all that way.
>
> I'm going to have to get used to seeing CPU temps in the 190F area I
> guess.  But dang, that's hot.  I suspect tho that if the sensor was in
> the same place on my current rig, it may measure that high as well, deep
> inside the chip.  It may be giving a temp where the CPU is always
> cooler, closer to the top where the CPU cooler is touching or
> something.  Placement of the sensor is key.   
>
> Still, if I could get it down to 140F or even 150F, I'd be happier. 
>
> Dang rig is snappy tho.  :-D 
>
> Dale
>
> :-)  :-) 
>


I adjusted the fan speed curves in the BIOS and it is a little cooler. 
Not much tho.  I'm convinced this has more to do with where the sensor
is located than anything else.  It is likely in a different place than
my current FX series CPU.  It could be more accurate than my FX series
CPU too.  Anyway, I got it where it is quiet at idle but the fans spin
up pretty quickly when the CPU starts getting loaded pretty good.  It's
likely as good as it is going to get.  Even at full speed, I can barely
hear the thing.  I kinda wish the Fractal case had those 200mm fans like
my Cooler Master and a side panel fan.  Those things even at full speed
are dead quiet.  I have to look at them to confirm they are spinning
after I do work in there.  That or feel the air moving. 

One other thing I found: while installing one of the front fans, the one
closest to the top, I had put it in backward.  It was blowing out, not
sucking in.  I reversed it after adjusting the fan curves.  It was one of
the fans I bought and it is a pretty hefty fan.  It has a higher CFM
rating than the Fractal fans.  So, finding that error and fixing it is a
good thing.  It moves a lot of air, and it blows right on the CPU
cooler.  Of all the fans to put in backwards. 

At this point, I'm mostly waiting on the new video card.  Oh, I also
installed the Nvidia drivers and the GUI is a lot better and faster. 
The mouse pointer moves nice and smooth.  Before, things were slow to
respond and the mouse pointer was kinda jerky.  The nouveau drivers
just don't cut it. 

I've got to take pics and upload them somewhere.  :/  I think I have an
account somewhere.

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Dale
Wols Lists wrote:
> On 16/06/2024 09:40, Michael wrote:
>> Now to get temp sensors and stuff to work.  I want to keep a eye on
>> temps for a bit.  I think the boot media was reporting the wrong
>> info.
>> Even the ambient temp was to high for this cool room.  It showed
>> like
>> 100F or something when my A/C is set to 68F or so.  Plus, the
>> side is
>> off the case at times.  New battle.  
>
>> The side panel should help improve air flow through the case
>> (depending on the
>> design).  I've seen CPU temperatures on big tower servers with dual
>> xeon CPUs
>> going up when the side panel was removed.
>
> My previous case had a cone connecting the CPU to a vent in the side.
> Whether that sucked warm case air over the CPU and vented it directly
> out, or sucked cool outside air over the CPU and vented it into the
> case, I don't know. Either way, the air flowing over the CPU was
> outside the case just before or after doing so.
>
> Cheers,
> Wol
>
>


I've seen factory puters have a duct thing that goes from the CPU fan to
the rear of the case also.  That way the heat from the CPU is never
vented inside the case.  I've also seen a few that are like you
mentioned.  I kinda liked the ones that remove the CPU heat from the
case to the rear myself.  That helps keep the other components cooler
since the heat from the CPU isn't released into the case. 

A lot of case makers use different tactics to control heat.  To me tho,
the best is a fan on the top of the case.  Every case that I've had that
had top fans ran cooler.  Heat wants to rise anyway so it's natural for
heat to go to the top.  As I mentioned elsewhere, I really like a side
fan as well.  They keep the mobo cool by providing fresh air to
everything from the video card to other components on the mobo.  When I
remove the side from my Cooler Master, the one thing that really gets
affected is the video card.

I think most case makers really put thought into cooling.  Sadly, they
want the flashy stuff too, but regardless of looks, cooling has to be a
very high priority.  Some come up with some inventive ways to do it. 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Wols Lists

On 16/06/2024 09:40, Michael wrote:

Now to get temp sensors and stuff to work.  I want to keep a eye on
temps for a bit.  I think the boot media was reporting the wrong info.
Even the ambient temp was to high for this cool room.  It showed like
100F or something when my A/C is set to 68F or so.  Plus, the side is
off the case at times.  New battle.  



The side panel should help improve air flow through the case (depending on the
design).  I've seen CPU temperatures on big tower servers with dual xeon CPUs
going up when the side panel was removed.


My previous case had a cone connecting the CPU to a vent in the side. 
Whether that sucked warm case air over the CPU and vented it directly 
out, or sucked cool outside air over the CPU and vented it into the 
case, I don't know. Either way, the air flowing over the CPU was outside 
the case just before or after doing so.


Cheers,
Wol



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Dale
Rich Freeman wrote:
> On Sun, Jun 16, 2024 at 12:55 AM Dale  wrote:
>> Besides, for the wattage
>> the CPU uses, the cooler I have is waay overkill.  I think my cooler
>> is rated well above 200 watts.  The CPU is around 100 watts, 105 I think
>> or maybe 95.
> So, I am just picking someplace a little random to reply to all of this.
>
> Normal temps vary by CPU model and you need to look up what is expected.
>
> All modern CPUs will throttle to maintain below a certain temp, and so
> if you have thermal issues you'll just get lower performance.
>
> A cooler might dissipate a certain amount of power, but that is going
> to be at a particular temp.  Obviously a radiator that is at ambient
> temperature will dissipate no heat at all.
>
> The external temp of the CPU has nothing to do with the internal temp
> of the CPU, and a modern CPU can generate MUCH more heat than it can
> internally transfer to the surface of the die, and so internally it
> will heat up even if you use liquid cooling.
>
> As far as governors go, I'm not sure what is even recommended with
> Linux with modern CPUs.  Most modern CPUs and their firmware manage
> heat/power based on performance limits.  AMD calls this
> Performance-based Overclocking, but it is basically how they work even
> up to factory clock rates.  Assuming you meet the cooling/power
> requirements the CPU can sustain a particular frequency on all its
> cores at once, and a higher frequency on only one core if the rest are
> idle, and then it has a maximum frequency that a small number of cores
> can temporarily exceed but internal temperature will rise when this
> happens until throttling kicks in (I think this is at least in part
> firmware modeled and not exclusively based on sensor data).  This is
> all by design in a desktop CPU, and allows a CPU to have significantly
> better burst performance than sustained performance, which is a good
> approach as desktop loads tend to be bursty.  I imagine server
> processors (like enterprise SSDs) are optimized more around sustained
> performance as they tend to be operated more at load.
>
> I suspect that the most recent CPU generations will work best if the
> hardware is allowed to manage frequency, with the OS at most being
> used to communicate whether a core is idle or not.
>

I have to say, mobos and CPUs have come a long way since my last build
about 10 or 11 years ago.  When the ASUS first booted and I went into
the BIOS thing, is it still called BIOS, it was very different.  I think
my current rig allows you to use the mouse.  It's slow tho.  This ASUS
is vastly improved.  It gives you a LOT more control and I'm not even
interested in trying to overclock or anything.  Just the fan controls
are a huge improvement.  I suspect most newer mobos are all that way.

I'm going to have to get used to seeing CPU temps in the 190F area I
guess.  But dang, that's hot.  I suspect tho that if the sensor was in
the same place on my current rig, it may measure that high as well, deep
inside the chip.  It may be giving a temp where the CPU is always
cooler, closer to the top where the CPU cooler is touching or
something.  Placement of the sensor is key.   

Still, if I could get it down to 140F or even 150F, I'd be happier. 

Dang rig is snappy tho.  :-D 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Dale
Mark Knecht wrote:
>
> Dale - sorry to bother you.
>
> Mark

No bother at all.  Could learn something.  FYI.  I read most every post
on this list.  Unless it is something I know absolutely nothing about or
don't use at all, I read the posts.  I just might learn something. 

I might add, stress-ng does a good job of putting a load on a CPU. 
According to htop, it maxes every core/thread out to 100% and stays
there.  I think it does memory too.  Never used it for that tho.  Might
test the m.2 stick with it too.  ;-) 
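Just as a sketch of what I mean (not verified here; the package is
app-benchmarks/stress-ng and the exact option set can vary a bit by
version):

# load every core/thread for a minute and print a short summary
stress-ng --cpu $(nproc) --timeout 60s --metrics-brief

# memory workers touching roughly 75% of RAM
stress-ng --vm 4 --vm-bytes 75% --timeout 60s

# hammer the filesystem on the m.2 stick (run it from a directory on that drive)
stress-ng --hdd 2 --hdd-bytes 4G --timeout 60s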

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Dale
Peter Humphrey wrote:
> On Sunday, 16 June 2024 14:35:34 BST Dale wrote:
>
>> I mentioned I found the correct drivers for the CPU and other temps
>> sensors but needed to reboot.
> What sensors are you using now? I just rely on what gkrellm finds; where it 
> shows more than one CPU or GPU temp I choose the highest one.
>


I finally got around to setting up the GUI part.  I ran gkrellm, love
that thing, and figured out the temps.  When on the command line tho, I
put together a command that filters the output down to just the sensors
that work and spits the info out for me.  This is the command I use.

sensors -f | egrep '(Tctl|Tccd1|GPU core|fan1|temp1|fan2|fan3|Adapter|Sensor 1|Sensor 2)'

It looks like this: 


Gentoo-1 ~ # /root/temps
Adapter: PCI adapter
Tctl:    +130.3°F 
Tccd1:    +94.1°F 
Adapter: PCI adapter
GPU core:    900.00 mV (min =  +0.78 V, max =  +1.16 V)
fan1:    2670 RPM
temp1:   +109.4°F  (high = +203.0°F, hyst = +37.4°F)
Adapter: ISA adapter
fan1:    0 RPM  (min =    0 RPM)
fan2: 1262 RPM  (min =    0 RPM)
fan3:  940 RPM  (min =    0 RPM)
Adapter: PCI adapter
Sensor 1:    +107.3°F  (low  = -459.7°F, high = +117503.3°F)
Sensor 2: +94.7°F  (low  = -459.7°F, high = +117503.3°F)
Gentoo-1 ~ #



That gets CPU, GPU, fans, and I think the m.2 stick is at the bottom. 
Basically, I get everything that is important.  If needed, I just run
sensors -f and get all of it, even things that don't work or have
nothing connected to them.  The biggest thing is getting the right
drivers in the kernel so that it can see the stuff correctly. 
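In case it helps anyone hunting for the right drivers, roughly what I'd
check (a sketch; the exact chip names depend on the board - k10temp
covers Ryzen CPU temps, nvme the m.2 stick, and something like nct6775
the Super I/O fan headers on a lot of ASUS boards):

# which hwmon drivers the running kernel already exposes
cat /sys/class/hwmon/hwmon*/name

# let lm-sensors probe for chips and suggest modules to load
sensors-detect

# check the likely options in the kernel config (needs CONFIG_IKCONFIG_PROC)
zgrep -E 'K10TEMP|NCT6775|NVME_HWMON' /proc/config.gz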

I did a duck search.  I don't use Google anymore because of the captcha
thing always popping up. Anyway, I found this web page. 

https://www.linux.com/topic/desktop/advanced-lm-sensors-tips-and-tricks-linux-0/

If you scroll down a bit, it tells what some of the sensors are for. 

I also found a video on youtube that talks about how to set the fan
controllers up in the BIOS. I didn't know that I was supposed to set up
the fans before booting and installing things.  Anyway, I made it a
little faster to respond and make the fans really spin up when things
warm up.  When I ran stress, in seconds I could hear the fans speed up. 
I had to mute the TV to hear them tho.  The case has six 140mm fans. 
Three in front, two on top and one in the back.  The power supply kinda
does its own thing.  Linky to video.

https://www.youtube.com/watch?v=65vsrzYOLZg

I may adjust the CPU fan a little bit next time I boot up.  I'll see
what it looks like and go from there.  It still runs warmer than I think
it should.  Also, I put my finger on the base of the CPU cooler; it
wasn't even warm, yet sensors shows it is around 190F.  That sensor must
be deep in the CPU die or something.  Nothing on the mobo is very warm
to the touch.  The VRM heat sinks are a tad bit warm but nothing to
worry about. 

I may do some tweaking but as far as the cooling goes, I think I'm OK. 
It shouldn't blow smoke.  I hope. 

Oh, I tried the kernel video drivers; that thing is slow.  I may switch
to the Nvidia drivers and see how that goes.  Thing is, I have another
card coming in.  I may wait until it gets here. 

Dale

:-)  :-)



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Rich Freeman
On Sun, Jun 16, 2024 at 12:55 AM Dale  wrote:
>
> Besides, for the wattage
> the CPU uses, the cooler I have is waay overkill.  I think my cooler
> is rated well above 200 watts.  The CPU is around 100 watts, 105 I think
> or maybe 95.

So, I am just picking someplace a little random to reply to all of this.

Normal temps vary by CPU model and you need to look up what is expected.

All modern CPUs will throttle to maintain below a certain temp, and so
if you have thermal issues you'll just get lower performance.

A cooler might dissipate a certain amount of power, but that is going
to be at a particular temp.  Obviously a radiator that is at ambient
temperature will dissipate no heat at all.

The external temp of the CPU has nothing to do with the internal temp
of the CPU, and a modern CPU can generate MUCH more heat than it can
internally transfer to the surface of the die, and so internally it
will heat up even if you use liquid cooling.

As far as governors go, I'm not sure what is even recommended with
Linux with modern CPUs.  Most modern CPUs and their firmware manage
heat/power based on performance limits.  AMD calls this
Performance-based Overclocking, but it is basically how they work even
up to factory clock rates.  Assuming you meet the cooling/power
requirements the CPU can sustain a particular frequency on all its
cores at once, and a higher frequency on only one core if the rest are
idle, and then it has a maximum frequency that a small number of cores
can temporarily exceed but internal temperature will rise when this
happens until throttling kicks in (I think this is at least in part
firmware modeled and not exclusively based on sensor data).  This is
all by design in a desktop CPU, and allows a CPU to have significantly
better burst performance than sustained performance, which is a good
approach as desktop loads tend to be bursty.  I imagine server
processors (like enterprise SSDs) are optimized more around sustained
performance as they tend to be operated more at load.

I suspect that the most recent CPU generations will work best if the
hardware is allowed to manage frequency, with the OS at most being
used to communicate whether a core is idle or not.
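If you want to see how that split looks on a given box, a quick sketch
(these are the standard cpufreq sysfs paths; which driver shows up -
amd-pstate, amd-pstate-epp or acpi-cpufreq - depends on kernel version
and BIOS settings):

# active frequency-scaling driver and governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# current per-core clocks
grep MHz /proc/cpuinfo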

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Mark Knecht
On Sun, Jun 16, 2024 at 5:59 AM Frank Steinmetzger  wrote:
>
> Am Sat, Jun 15, 2024 at 04:07:28PM -0700 schrieb Mark Knecht:
>
> >Now, the fun part. I wrote you a little Python program which on
> > my system is called Dales_Loop.py. This program has 3
> > parameters - a value to count to, the number of cores to be used,
> > and a timeout value to stop the program. Using a program like
> > this can give you repeatable results.
>
> FYI, there is a problem with your approach: python is not capable of true
> multiprocessing. While you can have multiple threads in your programm, in
> the end they are executed by a single thread in a time-sharing manner.
>
> This problem is known as the GIL—the Global Interpreter Lock. Unless you
use
> an external program to do the actual CPU work, i.e. let the linux kernel
do
> the actual parallelism and not python, your program is not faster than
doing
> everything in a single loop.
>
> See this page for a nice example which does basically the same as your
> program (heading “The Impact on Multi-Threaded Python Programs”),
including
> some comparative benchmarking between single loop and threaded loops:
> https://realpython.com/python-gil/
>

I'm sorry Frank, but apparently you didn't read the code.  Indeed the GIL
is an issue, but this program uses the multiprocessing library and starts
processes, not threads.  Running the program for 30 seconds with
different values of num_processes I see

1   2212
2   
3   6107
4   8199
5   10174

Additionally, I see roughly num_processes python jobs running in btop at
all times the program is running, clearly demonstrating that each has its
own process ID and is being managed by the kernel.  As each process
completes, a new one is started with a new process ID.
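If anyone wants to check that for themselves without reading the code, a
quick sketch to run while the script is going (Dales_Loop.py is just
whatever you named the file; the interpreter may be python3.x on your
system):

# list the worker processes - each line is a separate PID
pgrep -af Dales_Loop.py

# or watch them with ps
ps -C python3 -o pid,pcpu,etime,args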

I use this library successfully in a home-built AI program I'm writing,
and I clearly get through roughly (number of processes * data sets)
during training, validation and testing.

If someone wants to use a packaged stress test program, I see nothing
wrong with that.  If you don't want to read the code and understand it
for yourself, then I see nothing wrong with that either, but please, if
you're gonna talk to me about a little code snippet, then at least read
it, run it and understand it first.

Dale - sorry to bother you.

Mark


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Peter Humphrey
On Sunday, 16 June 2024 14:35:34 BST Dale wrote:

> I mentioned I found the correct drivers for the CPU and other temps
> sensors but needed to reboot.

What sensors are you using now? I just rely on what gkrellm finds; where it 
shows more than one CPU or GPU temp I choose the highest one.

-- 
Regards,
Peter.






Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Dale
Michael wrote:
> On Sunday, 16 June 2024 09:40:57 BST you wrote:
>> On Sunday, 16 June 2024 05:55:45 BST Dale wrote:
>>> William Kenworthy wrote:
 On 16/6/24 07:07, Mark Knecht wrote:
> 
>
>> I still don't understand the efi thing.  I'm booted up tho.  I'm
> happy.
>
>> Now to get temp sensors and stuff to work.  I want to keep a eye on
>> temps for a bit.  I think the boot media was reporting the wrong
>> info.
>> Even the ambient temp was to high for this cool room.  It showed like
>> 100F or something when my A/C is set to 68F or so.  Plus, the side is
>> off the case at times.  New battle.  ;-)
>> The side panel should help improve air flow through the case (depending on
>> the design).  I've seen CPU temperatures on big tower servers with dual
>> xeon CPUs going up when the side panel was removed.
>>


This is very true of a LOT of cases.  This one is so open and the way
the fans blow, it doesn't seem to make much difference.  I usually only
take the side off when I'm looking at something or measuring temps of
different things with my IR thing.  Some things are warmer than I'd like
but they have factory heatsinks.  On my current rig tho which has a side
fan, one of those large 200mm things, it makes a big difference on both
the CPU and the video card.  It is truly amazing how much difference
that side fan makes even tho it is blowing only a moderate amount of
air.  With the side on, I think my Cooler Master HAF-932 case cools
better than the Fractal.  If the Fractal had a side fan, it would win
hands down.  Thing is, the side is glass so no cutting allowed. 

>> It used to be the case the thermal paste would dry out and needed replacing
>> within 5 years or so.  These days the top end thermal paste lasts longer and
>> it is much more expensive, but I'm yet to find out how long it lasts.  ;-)

I agree.  I bought a couple syringes of some top dollar thermal grease
several years ago, likely with the last build.  I have enough to last a
really long time.  One tube is Arctic Silver I think.  Then I bought
another brand that is supposed to be even better than the Arctic
Silver.  I can't recall the name.  Some of my tubes have silver in them
to some degree.  I've never had any of the good ones dry out.  I have
had some cheap stuff I use on transistors dry out tho.  It's some
pink stuff with no brand on it.  No silver either. 


>Now, the fun part. I wrote you a little Python program which on
>> [snip ...]
>>
>>> My complaint, the temps sensors is reporting is way higher than my IR
>>> thermometer says.  Even what I think is the ambient temp is way off.
>>> I've googled and others report the same thing.  During one compile, I
>>> pointed the IR sensor right at the base of the CPU cooler.  It may not
>>> be as hot as the CPU is but it is closer than anything else.  I measured
>>> like 80F or something
>> That's approximating the TCase, but you're still not close enough to measure
>> that temperature.  You'd need to delid the CPU for this ... definitely NOT
>> recommended.
>>
>>> while sensors was reporting above 140F or so.
>> That's the TjMax and for your 5800X CPU this is comfortably within the TjMax
>> temperature of 194°F (90°C):
>>
>> https://www.amd.com/en/products/processors/desktops/ryzen/5000-series/amd-ryzen-7-5800x.html#product-specs

See below for more info.  I thought it had the wrong sensor driver and
those temps are just wrong.  The BIOS, or whatever it is called
nowadays, also shows much lower temps. 

>>> I
>>> can see a little difference but not that much.  Besides, for the wattage
>>> the CPU uses, the cooler I have is waay overkill.  I think my cooler
>>> is rated well above 200 watts.  The CPU is around 100 watts, 105 I think
>>> or maybe 95.
>> 105W - see link above.

I thought it was either 5 watts high or low.  I wasn't 100% sure. 

>>> Plus, this room is fairly cold.  A/C currently set to
>>> 68F.  One can dispute the CPU temp I guess but not the ambient temp.  If
>>> one is off, I suspect both are off.
>> Not necessarily - where is the ambient temperature sensor located?
>>

My current Gigabyte has one on the mobo somewhere.  I've never found
it tho.  With the side off and a large fan blowing on it, it gets real
close to room temp.  With the side on, it does warm up a little, likely
just because of the way the air flows around in there and some warm
components around it.  I'm not real sure on this ASUS, but I've never
seen a mobo without an ambient temp sensor.  I've read the BIOS uses it
to help determine the speed of the case fans if they are connected to
the mobo.  I think my front fan and top fan are connected to the mobo on
my current rig, and when it warms up inside, even if the CPU isn't that
warm, the front and top fans speed up.  I sometimes turn the A/C off
and forget.  When I do that, the mobo-controlled case fans spin up even
tho the CPU is idle.  It seems to like 85F or so.  The side fan is
connected to 

Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Frank Steinmetzger
Am Sat, Jun 15, 2024 at 04:07:28PM -0700 schrieb Mark Knecht:

>Now, the fun part. I wrote you a little Python program which on
> my system is called Dales_Loop.py. This program has 3
> parameters - a value to count to, the number of cores to be used,
> and a timeout value to stop the program. Using a program like
> this can give you repeatable results.

FYI, there is a problem with your approach: Python is not capable of true 
multiprocessing. While you can have multiple threads in your program, in 
the end they are executed by a single thread in a time-sharing manner.

This problem is known as the GIL—the Global Interpreter Lock. Unless you use 
an external program to do the actual CPU work, i.e. let the linux kernel do 
the actual parallelism and not python, your program is not faster than doing 
everything in a single loop.

See this page for a nice example which does basically the same as your 
program (heading “The Impact on Multi-Threaded Python Programs”), including 
some comparative benchmarking between single loop and threaded loops:
https://realpython.com/python-gil/

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

At night, when everybody sleeps, usually there’s nobody awake.




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Michael
On Sunday, 16 June 2024 09:40:57 BST you wrote:
> On Sunday, 16 June 2024 05:55:45 BST Dale wrote:
> > William Kenworthy wrote:
> > > On 16/6/24 07:07, Mark Knecht wrote:
> > >> 
> > >> 
> > >> > I still don't understand the efi thing.  I'm booted up tho.  I'm
> > >> 
> > >> happy.
> > >> 
> > >> > Now to get temp sensors and stuff to work.  I want to keep a eye on
> > >> > temps for a bit.  I think the boot media was reporting the wrong
> > >> > info.
> > >> > Even the ambient temp was to high for this cool room.  It showed like
> > >> > 100F or something when my A/C is set to 68F or so.  Plus, the side is
> > >> > off the case at times.  New battle.  ;-)
> 
> The side panel should help improve air flow through the case (depending on
> the design).  I've seen CPU temperatures on big tower servers with dual
> xeon CPUs going up when the side panel was removed.
> 
> > >> > Dale
> > >> 
> > >> 
> > >> 
> > >> Hi Dale,
> > >> 
> > >>Congrats on getting your new machine working. I think you've
> > >>received
> > >> 
> > >> a lot of good info on temperature effects but there is one thing I
> > >> didn't
> > >> see anyone talking about so I'll mention it here. (Note - my career was
> > >> chip design in Silicon Valley so I'm speaking from experience in both
> > >> chips and PCs that use them.
> > >> 
> > >>First, don't worry too much about high temperatures hurting your
> > >> 
> > >> processor or the chips in the system. They can stand up to 70C
> > >> pretty much forever and 100C for long periods of time. Long before
> > >> anything would get damaged at the chip level, if it ever gets damaged,
> > >> you are going to have timing problems that would either cause the
> > >> system to crash, corrupt data, or both, so temps are important
> > >> but it won't be damage to the processor. (Assuming it's a good
> > >> chip that meets all specs and is well tested which I'm sure yours
> > >> is.
> > >> 
> > >>The thing I think you should be aware of is that long-term high
> > >> 
> > >> temps, while they don't hurt the processor, can very possibly degrade
> > >> the thermal paste that is between your processor or M.2 chips
> > >> and their heat sinks & fans. Thermal paste can and will degrade
> > >> of time and high temps make it degrade faster so the temps you
> > >> see today may not be the same as what you see 2 or 3 years from
> > >> now.
> 
> It used to be the case the thermal paste would dry out and needed replacing
> within 5 years or so.  These days the top end thermal paste lasts longer and
> it is much more expensive, but I'm yet to find out how long it lasts.  ;-)
> > >>Now, the fun part. I wrote you a little Python program which on
> 
> [snip ...]
> 
> > My complaint, the temps sensors is reporting is way higher than my IR
> > thermometer says.  Even what I think is the ambient temp is way off.
> > I've googled and others report the same thing.  During one compile, I
> > pointed the IR sensor right at the base of the CPU cooler.  It may not
> > be as hot as the CPU is but it is closer than anything else.  I measured
> > like 80F or something
> 
> That's approximating the TCase, but you're still not close enough to measure
> that temperature.  You'd need to delid the CPU for this ... definitely NOT
> recommended.
> 
> > while sensors was reporting above 140F or so.
> 
> That's the TjMax and for your 5800X CPU this is comfortably within the TjMax
> temperature of 194°F (90°C):
> 
> https://www.amd.com/en/products/processors/desktops/ryzen/5000-series/amd-ryzen-7-5800x.html#product-specs
> > I
> > can see a little difference but not that much.  Besides, for the wattage
> > the CPU uses, the cooler I have is waay overkill.  I think my cooler
> > is rated well above 200 watts.  The CPU is around 100 watts, 105 I think
> > or maybe 95.
> 
> 105W - see link above.
> 
> > Plus, this room is fairly cold.  A/C currently set to
> > 68F.  One can dispute the CPU temp I guess but not the ambient temp.  If
> > one is off, I suspect both are off.
> 
> Not necessarily - where is the ambient temperature sensor located?
> 
> > Oh, the CPU fan isn't spinning fast
> > either.  I'd guess it isn't even running at half speed even when
> > compiling and htop shows all cores/threads at the max.
> 
> Your UEFI (BIOS) menu should have settings for tweaking the fans and
> changing their cooling profile to make them quieter, or spin them up
> sooner.  Start with default settings and tune it up/down from there to
> match your needs.

Take a look at the CPU Thermal Expectations in this article:

https://www.pcgamer.com/amd-views-ryzen-5000-cpu-temperatures-up-to-95c-as-typical-and-by-design/




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Michael
On Sunday, 16 June 2024 05:55:45 BST Dale wrote:
> William Kenworthy wrote:
> > On 16/6/24 07:07, Mark Knecht wrote:
> >> 
> >> 
> >> > I still don't understand the efi thing.  I'm booted up tho.  I'm
> >> 
> >> happy.
> >> 
> >> > Now to get temp sensors and stuff to work.  I want to keep a eye on
> >> > temps for a bit.  I think the boot media was reporting the wrong info.
> >> > Even the ambient temp was to high for this cool room.  It showed like
> >> > 100F or something when my A/C is set to 68F or so.  Plus, the side is
> >> > off the case at times.  New battle.  ;-)

The side panel should help improve air flow through the case (depending on the 
design).  I've seen CPU temperatures on big tower servers with dual xeon CPUs 
going up when the side panel was removed.


> >> > Dale
> >> 
> >> 
> >> 
> >> Hi Dale,
> >>Congrats on getting your new machine working. I think you've received
> >> a lot of good info on temperature effects but there is one thing I
> >> didn't
> >> see anyone talking about so I'll mention it here. (Note - my career was
> >> chip design in Silicon Valley so I'm speaking from experience in both
> >> chips and PCs that use them.
> >> 
> >>First, don't worry too much about high temperatures hurting your
> >> processor or the chips in the system. They can stand up to 70C
> >> pretty much forever and 100C for long periods of time. Long before
> >> anything would get damaged at the chip level, if it ever gets damaged,
> >> you are going to have timing problems that would either cause the
> >> system to crash, corrupt data, or both, so temps are important
> >> but it won't be damage to the processor. (Assuming it's a good
> >> chip that meets all specs and is well tested which I'm sure yours
> >> is.
> >> 
> >>The thing I think you should be aware of is that long-term high
> >> temps, while they don't hurt the processor, can very possibly degrade
> >> the thermal paste that is between your processor or M.2 chips
> >> and their heat sinks & fans. Thermal paste can and will degrade
> >> of time and high temps make it degrade faster so the temps you
> >> see today may not be the same as what you see 2 or 3 years from
> >> now.

It used to be the case that thermal paste would dry out and need replacing 
within 5 years or so.  These days the top end thermal paste lasts longer and 
it is much more expensive, but I'm yet to find out how long it lasts.  ;-)


> >>Now, the fun part. I wrote you a little Python program which on
[snip ...]

> My complaint, the temps sensors is reporting is way higher than my IR
> thermometer says.  Even what I think is the ambient temp is way off. 
> I've googled and others report the same thing.  During one compile, I
> pointed the IR sensor right at the base of the CPU cooler.  It may not
> be as hot as the CPU is but it is closer than anything else.  I measured
> like 80F or something 

That's approximating the TCase, but you're still not close enough to measure 
that temperature.  You'd need to delid the CPU for this ... definitely NOT 
recommended.  

> while sensors was reporting above 140F or so.

That's the TjMax and for your 5800X CPU this is comfortably within the TjMax 
temperature of 194°F (90°C):

https://www.amd.com/en/products/processors/desktops/ryzen/5000-series/amd-ryzen-7-5800x.html#product-specs


> I
> can see a little difference but not that much.  Besides, for the wattage
> the CPU uses, the cooler I have is waay overkill.  I think my cooler
> is rated well above 200 watts.  The CPU is around 100 watts, 105 I think
> or maybe 95.

105W - see link above.

> Plus, this room is fairly cold.  A/C currently set to
> 68F.  One can dispute the CPU temp I guess but not the ambient temp.  If
> one is off, I suspect both are off.

Not necessarily - where is the ambient temperature sensor located?

> Oh, the CPU fan isn't spinning fast
> either.  I'd guess it isn't even running at half speed even when
> compiling and htop shows all cores/threads at the max.

Your UEFI (BIOS) menu should have settings for tweaking the fans and changing 
their cooling profile to make them quieter, or spin them up sooner.  Start 
with default settings and tune it up/down from there to match your needs.




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-15 Thread Dale
William Kenworthy wrote:
>
> On 16/6/24 07:07, Mark Knecht wrote:
>> 
>> > I still don't understand the efi thing.  I'm booted up tho.  I'm
>> happy.
>> > Now to get temp sensors and stuff to work.  I want to keep a eye on
>> > temps for a bit.  I think the boot media was reporting the wrong info.
>> > Even the ambient temp was to high for this cool room.  It showed like
>> > 100F or something when my A/C is set to 68F or so.  Plus, the side is
>> > off the case at times.  New battle.  ;-)
>> >
>> > Dale
>> 
>>
>> Hi Dale,
>>    Congrats on getting your new machine working. I think you've received
>> a lot of good info on temperature effects but there is one thing I
>> didn't
>> see anyone talking about so I'll mention it here. (Note - my career was
>> chip design in Silicon Valley so I'm speaking from experience in both
>> chips and PCs that use them.
>>
>>    First, don't worry too much about high temperatures hurting your
>> processor or the chips in the system. They can stand up to 70C
>> pretty much forever and 100C for long periods of time. Long before
>> anything would get damaged at the chip level, if it ever gets damaged,
>> you are going to have timing problems that would either cause the
>> system to crash, corrupt data, or both, so temps are important
>> but it won't be damage to the processor. (Assuming it's a good
>> chip that meets all specs and is well tested which I'm sure yours
>> is.
>>
>>    The thing I think you should be aware of is that long-term high
>> temps, while they don't hurt the processor, can very possibly degrade
>> the thermal paste that is between your processor or M.2 chips
>> and their heat sinks & fans. Thermal paste can and will degrade
>> of time and high temps make it degrade faster so the temps you
>> see today may not be the same as what you see 2 or 3 years from
>> now.
>>
>>    Now, the fun part. I wrote you a little Python program which on
>> my system is called Dales_Loop.py. This program has 3
>> parameters - a value to count to, the number of cores to be used,
>> and a timeout value to stop the program. Using a program like
>> this can give you repeatable results. I use btop in a second
>> terminal to watch individual core temps As provided it will
>> loop 1,000,000 on 4 cores in parallel. When it finishes the
>> count it will start another process and count again. It will
>> do this for 30 seconds and then stop. When finished it will
>> tell you how many processes it ran over the complete test.
>>
>>    If you wanted to do other things inside the loop, like floating
>> point math or things that would stress the machine in other
>> ways you can add that to the subroutine.
>>
>>    Anyway, you can start with 4 cores, up the time value
>> to run the test longer, up the count value to run each
>> process longer, and most fun, raise the number of cores
>> to start using more of the processor. On my Ryzen 9
>> 5950X, which is water cooled, I don't get much fan reaction
>> until I'm using 16 of the 32 threads.
>>
>>    Best wishes for you and your new rig.
>>
>> Cheers,
>> Mark
>>
>>
>>
>> import multiprocessing
>> import time
>>
>> def count_to_large_number(count_value):
>>     for i in range(count_value):
>>         pass  # Replace with your desired computation or task
>>
>> def main():
>>     num_processes = 4
>>     count_value = 100
>>     runtime_seconds = 30
>>
>>     processes = []
>>     start_time = time.time()
>>     total_processes_started = 0
>>
>>     while time.time() - start_time < runtime_seconds:
>>         for process in processes:
>>             if not process.is_alive():
>>                 processes.remove(process)
>>
>>         while len(processes) < num_processes:
>>             process =
>> multiprocessing.Process(target=count_to_large_number,
>> args=(count_value,))
>>             processes.append(process)
>>             process.start()
>>             total_processes_started += 1
>>
>>     for process in processes:
>>         process.join()
>>
>>     print(f"Total processes started: {total_processes_started}")
>>
>> if __name__ == "__main__":
>>     main()
>>
> or use app-benchmarks/stress
>
> BillK

That's the plan.  I'm still installing KDE in bits.  I'm going through
the meta packages right now.  That gives it heating and cooling cycles
which helps heat up the thermal grease but doesn't heat it up for very
long periods.  Tomorrow maybe, I'll use stress to really heat it up.  30
minutes with all cores and threads should stir up something.  :/

My complaint: the temps sensors is reporting are way higher than my IR
thermometer says.  Even what I think is the ambient temp is way off. 
I've googled and others report the same thing.  During one compile, I
pointed the IR sensor right at the base of the CPU cooler.  It may not
be as hot as the CPU is but it is closer than anything else.  I measured
like 80F or something while sensors was reporting above 140F or so.  I
can see a little difference but not that much.  Besides, for the wattage

Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-15 Thread William Kenworthy



On 16/6/24 07:07, Mark Knecht wrote:


> I still don't understand the efi thing.  I'm booted up tho.  I'm happy.
> Now to get temp sensors and stuff to work.  I want to keep a eye on
> temps for a bit.  I think the boot media was reporting the wrong info.
> Even the ambient temp was to high for this cool room.  It showed like
> 100F or something when my A/C is set to 68F or so.  Plus, the side is
> off the case at times.  New battle.  ;-)
>
> Dale


Hi Dale,
   Congrats on getting your new machine working. I think you've received
a lot of good info on temperature effects but there is one thing I didn't
see anyone talking about so I'll mention it here. (Note - my career was
chip design in Silicon Valley so I'm speaking from experience in both
chips and PCs that use them.

   First, don't worry too much about high temperatures hurting your
processor or the chips in the system. They can stand up to 70C
pretty much forever and 100C for long periods of time. Long before
anything would get damaged at the chip level, if it ever gets damaged,
you are going to have timing problems that would either cause the
system to crash, corrupt data, or both, so temps are important
but it won't be damage to the processor. (Assuming it's a good
chip that meets all specs and is well tested which I'm sure yours
is.

   The thing I think you should be aware of is that long-term high
temps, while they don't hurt the processor, can very possibly degrade
the thermal paste that is between your processor or M.2 chips
and their heat sinks & fans. Thermal paste can and will degrade over
time, and high temps make it degrade faster, so the temps you
see today may not be the same as what you see 2 or 3 years from
now.

   Now, the fun part. I wrote you a little Python program which on
my system is called Dales_Loop.py. This program has 3
parameters - a value to count to, the number of cores to be used,
and a timeout value to stop the program. Using a program like
this can give you repeatable results. I use btop in a second
terminal to watch individual core temps. As provided it will
loop 1,000,000 on 4 cores in parallel. When it finishes the
count it will start another process and count again. It will
do this for 30 seconds and then stop. When finished it will
tell you how many processes it ran over the complete test.

   If you wanted to do other things inside the loop, like floating
point math or things that would stress the machine in other
ways you can add that to the subroutine.

   Anyway, you can start with 4 cores, up the time value
to run the test longer, up the count value to run each
process longer, and most fun, raise the number of cores
to start using more of the processor. On my Ryzen 9
5950X, which is water cooled, I don't get much fan reaction
until I'm using 16 of the 32 threads.

   Best wishes for you and your new rig.

Cheers,
Mark



import multiprocessing
import time

def count_to_large_number(count_value):
    for i in range(count_value):
        pass  # Replace with your desired computation or task

def main():
    num_processes = 4
    count_value = 100
    runtime_seconds = 30

    processes = []
    start_time = time.time()
    total_processes_started = 0

    while time.time() - start_time < runtime_seconds:
        for process in processes:
            if not process.is_alive():
                processes.remove(process)

        while len(processes) < num_processes:
            process = multiprocessing.Process(target=count_to_large_number,
                                               args=(count_value,))
            processes.append(process)
            process.start()
            total_processes_started += 1

    for process in processes:
        process.join()

    print(f"Total processes started: {total_processes_started}")

if __name__ == "__main__":
    main()


or use app-benchmarks/stress

BillK





Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-15 Thread Mark Knecht

> I still don't understand the efi thing.  I'm booted up tho.  I'm happy.
> Now to get temp sensors and stuff to work.  I want to keep a eye on
> temps for a bit.  I think the boot media was reporting the wrong info.
> Even the ambient temp was to high for this cool room.  It showed like
> 100F or something when my A/C is set to 68F or so.  Plus, the side is
> off the case at times.  New battle.  ;-)
>
> Dale


Hi Dale,
   Congrats on getting your new machine working. I think you've received
a lot of good info on temperature effects but there is one thing I didn't
see anyone talking about so I'll mention it here. (Note - my career was
chip design in Silicon Valley so I'm speaking from experience in both
chips and PCs that use them.)

   First, don't worry too much about high temperatures hurting your
processor or the chips in the system. They can stand up to 70C
pretty much forever and 100C for long periods of time. Long before
anything would get damaged at the chip level, if it ever gets damaged,
you are going to have timing problems that would either cause the
system to crash, corrupt data, or both, so temps are important
but it won't be damage to the processor. (Assuming it's a good
chip that meets all specs and is well tested which I'm sure yours
is.)

   The thing I think you should be aware of is that long-term high
temps, while they don't hurt the processor, can very possibly degrade
the thermal paste that is between your processor or M.2 chips
and their heat sinks & fans. Thermal paste can and will degrade over
time, and high temps make it degrade faster, so the temps you
see today may not be the same as what you see 2 or 3 years from
now.

   Now, the fun part. I wrote you a little Python program which on
my system is called Dales_Loop.py. This program has 3
parameters - a value to count to, the number of cores to be used,
and a timeout value to stop the program. Using a program like
this can give you repeatable results. I use btop in a second
terminal to watch individual core temps. As provided it will
loop 1,000,000 on 4 cores in parallel. When it finishes the
count it will start another process and count again. It will
do this for 30 seconds and then stop. When finished it will
tell you how many processes it ran over the complete test.

   If you wanted to do other things inside the loop, like floating
point math or things that would stress the machine in other
ways you can add that to the subroutine.

   Anyway, you can start with 4 cores, up the time value
to run the test longer, up the count value to run each
process longer, and most fun, raise the number of cores
to start using more of the processor. On my Ryzen 9
5950X, which is water cooled, I don't get much fan reaction
until I'm using 16 of the 32 threads.

   Best wishes for you and your new rig.

Cheers,
Mark



import multiprocessing
import time

def count_to_large_number(count_value):
    for i in range(count_value):
        pass  # Replace with your desired computation or task

def main():
    num_processes = 4
    count_value = 100
    runtime_seconds = 30

    processes = []
    start_time = time.time()
    total_processes_started = 0

    while time.time() - start_time < runtime_seconds:
        for process in processes:
            if not process.is_alive():
                processes.remove(process)

        while len(processes) < num_processes:
            process = multiprocessing.Process(target=count_to_large_number,
                                              args=(count_value,))
            processes.append(process)
            process.start()
            total_processes_started += 1

    for process in processes:
        process.join()

    print(f"Total processes started: {total_processes_started}")

if __name__ == "__main__":
    main()


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-15 Thread Dale
Michael wrote:
> On Saturday, 15 June 2024 12:01:26 BST Dale wrote:
>> Michael wrote:
>>> b) Using a bootloader:
>>>
>>> Mount your ESP under the /efi mountpoint.  GRUB et al, will install their
>>> .efi image in the /efi/EFI/ directory.  You can have your /boot as a
>>> directory on your / partition, or on its own separate partition with a
>>> more robust fs type than ESP's FAT and your kernel images will be
>>> installed in there.
>> H.  If I have a separate /boot, then efi gets mounted under /boot? 
>> Like this:
>>
>> /boot/efi/
> "...  Mounting the ESP to /boot/efi/, as was traditionally done, is not 
> recommended."
>
> Please read:
>
> https://wiki.gentoo.org/wiki/EFI_System_Partition#Mount_point
>
>> I'd like to use Grub, it's what I'm used to mostly.  That way I can
>> update grub with its command and I guess update the efi thingy too when
>> I add kernels.  I'm not sure on that tho.  I could be wrong.
> You can still use GRUB.  Example:
> =
> EFI Partition:  /dev/nvme0n1p1 type ef00
>
> Partition GUID code: C12A7328-F81F-11D2-BA4B-00A0C93EC93B (EFI system 
> partition)
>
> Mountpoint: /efi
>
> Filesystem: FAT32
> =
>
> Then you can have a separate boot partition, example:
> =
> Boot Partition:  /dev/nvme0n1p2, type 8300
>
> Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem)
>
> Mountpoint: /boot
>
> Filesystem: ext2/3/4/xfs/btrfs/etc.
> ===
>
>
> [snip...]
>>> ~ # du -s -h /var
>>> 17G /var
>> Well, it is large but it should last me a long time.  Who knows what
>> portage will do next.  When the distfiles and such moved to /var, it's a
>> good thing I was on LVM.  This time, I'm not using LVM so gotta plan
>> further ahead. 
> You can use btrfs or zfs and have /root, /home, /var, /what-ever mounted in 
> subvolumes.  This way they will use/share the free space of the single top 
> level partition/disk.
>


Update.  I just followed the docs.  I had little idea of what I was
doing but I just did what it said.  Dang thing booted, first time.  Even
my fresh new kernel worked.  :-D :-D 

Since I was confused and thought /boot and efi could be the same
partition, I ended up with /boot on the root partition.  I may redo that
later.  It works tho.  Plus, loads of space for other images etc. 

I still don't understand the efi thing.  I'm booted up tho.  I'm happy. 
Now to get temp sensors and stuff to work.  I want to keep an eye on
temps for a bit.  I think the boot media was reporting the wrong info. 
Even the ambient temp was too high for this cool room.  It showed like
100F or something when my A/C is set to 68F or so.  Plus, the side is
off the case at times.  New battle.  ;-) 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-15 Thread Peter Humphrey
On Saturday, 15 June 2024 17:55:17 BST Michael wrote:

--->8

Thanks, but I'll stick to what I know if you don't mind.

-- 
Regards,
Peter.






Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-15 Thread Michael
On Saturday, 15 June 2024 16:28:29 BST Peter Humphrey wrote:
> On Saturday, 15 June 2024 13:01:33 BST Dale wrote:
> > Could you share the boot screen again?
> 
> New version attached...
> 
> > I used lilo ages ago then switched to Grub.  Grub is massive but it works
> > well enough.
> 
> ...as long as you only want to specify one kernel image.

Not really.  You can add as many kernel image versions as you like and in 
addition customise the GRUB configuration to add kernel command-line 
parameters of choice.


> > Thing is, seeing your screen may help me to understand how
> > certain options work. It may make me use the bootloader you use, which is
> > what by the way?
> 
> I use bootctl from sys-apps/systemd-utils* with USE="acl boot kernel-install
> kmod tmpfiles udev -secureboot (-selinux) (-split-usr) -sysusers -test
> -ukify"
> 
> Sys-kernel/kernelinstall has USE="-dracut (-efistub) -grub -refind -systemd
> -systemd-boot -uki -ukify"
> 
> Notice the -systemd -systemd-boot flags. The wiki and some news items say to
> set those, but then I'd end up with the unintelligible file and directory
> names I mentioned - I'm not well versed in mental 32-digit hex
> number-juggling.  :)
> 
> --->8

I thought the 'title' for any systemd-boot listed kernel can be edited to suit 
user preferences, as described here:

https://forums.gentoo.org/viewtopic-p-8826048.html


> > Like you, I keep old kernels around too.  Eventually, I clean out old
> > ones but I like to keep at least a couple around just in case one goes
> > wonky.  At least I can boot a older kernel to fix things.  I also do
> > things the manual way.  I copy my kernels over with names I like, copy
> > the config file over with a matching name as well.
> 
> That's all automatic here; kernelinstall does it when I call 'make install'.
> All but two of the files shown in 'ls -1 /boot', below, are as 'make
> install' set them. The exceptions to that are early_ucode.cpio and
> intel-uc.img.

You can build these in-kernel, by adding their path/name in your kernel 
config, e.g.:

CONFIG_FW_LOADER=y
CONFIG_EXTRA_FIRMWARE="intel-ucode/06-3c-03"
CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"

See here:

https://wiki.gentoo.org/wiki/Intel_microcode#New_method_without_initram-fs.2Fdisk_.28efistub_compatible.29


> The only files I maintain myself are those under /boot/loader/entries; they
> define the menu items in the attachment here:
> 
> 06-gentoo-rescue-6.6.30.conf
> 07-gentoo-rescue-6.6.30.nonet.conf
> 08-gentoo-rescue-6.6.21.conf
> 09-gentoo-rescue-6.6.21.nonet.conf
> 30-gentoo-6.6.30.conf
> 32-gentoo-6.6.30.nox.conf
> 34-gentoo-6.6.30.nonet.conf
> 40-gentoo-6.6.21.conf
> 42-gentoo-6.6.21.nox.conf
> 44-gentoo-6.6.21.nonet.conf
> 
> E.g: $ cat /boot/loader/entries/34-gentoo-6.6.30.nonet.conf
> title Gentoo 6.6.30 (No network)
> version 6.6.30-gentoo
> linux vmlinuz-6.6.30-gentoo
> options root=/dev/nvme0n1p5 net.ifnames=0 raid=noautodetect \
> softlevel=nonetwork
> 
> > I do let dracut build the init thingy.  I do edit the name to match the
> > kernel so that grub sees it.  So, you not alone doing it the manual way,
> > as
> > much as I can anyway.
> 
> I don't need an initramfs, other than early_ucode.cpio and intel-uc.img,
> because I don't want the complications of a separate /usr partition.
> 
> $ ls -1 /boot
> config-6.6.21-gentoo
> config-6.6.21-gentoo-rescue
> config-6.6.30-gentoo
> config-6.6.30-gentoo-rescue
> early_ucode.cpio
> EFI
> intel-uc.img
> loader
> System.map-6.6.21-gentoo
> System.map-6.6.21-gentoo-rescue
> System.map-6.6.30-gentoo
> System.map-6.6.30-gentoo-rescue
> vmlinuz-6.6.21-gentoo
> vmlinuz-6.6.21-gentoo-rescue
> vmlinuz-6.6.30-gentoo
> vmlinuz-6.6.30-gentoo-rescue
> 
> > Thanks much.
> 
> *   The handbook says to use efibootmgr to create boot entries, but that
> again assumes you only want a single bootable image. It is occasionally
> useful, though, to clear up any mess I may have made.

With the efibootmgr you can have more than one bootable image, set your own 
preferred label for it and you can hardcode any kernel command-line parameters 
in each kernel image; e.g.:

CONFIG_CMDLINE="root=PARTUUID=--XXX-- snd-hda-intel.index=1,0"
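
For completeness, registering a self-contained EFI-stub kernel like that 
as its own boot entry would look roughly like this (a sketch, not verified 
here; the disk, partition number, label and ESP path are placeholders to 
adjust for your own layout):

mkdir -p /efi/EFI/Gentoo
cp /boot/vmlinuz-6.6.30-gentoo /efi/EFI/Gentoo/vmlinuz-6.6.30-gentoo.efi
efibootmgr --create --disk /dev/nvme0n1 --part 1 \
    --label "Gentoo 6.6.30" \
    --loader '\EFI\Gentoo\vmlinuz-6.6.30-gentoo.efi'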




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-15 Thread Michael
On Saturday, 15 June 2024 12:01:26 BST Dale wrote:
> Michael wrote:

> > b) Using a bootloader:
> > 
> > Mount your ESP under the /efi mountpoint.  GRUB et al, will install their
> > .efi image in the /efi/EFI/ directory.  You can have your /boot as a
> > directory on your / partition, or on its own separate partition with a
> > more robust fs type than ESP's FAT and your kernel images will be
> > installed in there.
> 
> H.  If I have a separate /boot, then efi gets mounted under /boot? 
> Like this:
> 
> /boot/efi/

"...  Mounting the ESP to /boot/efi/, as was traditionally done, is not 
recommended."

Please read:

https://wiki.gentoo.org/wiki/EFI_System_Partition#Mount_point

> I'd like to use Grub, it's what I'm used to mostly.  That way I can
> update grub with its command and I guess update the efi thingy too when
> I add kernels.  I'm not sure on that tho.  I could be wrong.

You can still use GRUB.  Example:
=
EFI Partition:  /dev/nvme0n1p1 type ef00

Partition GUID code: C12A7328-F81F-11D2-BA4B-00A0C93EC93B (EFI system 
partition)

Mountpoint: /efi

Filesystem: FAT32
=

Then you can have a separate boot partition, example:
=
Boot Partition:  /dev/nvme0n1p2, type 8300

Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem)

Mountpoint: /boot

Filesystem: ext2/3/4/xfs/btrfs/etc.
===
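
With a layout like that, the GRUB side is the usual pair of commands (a 
sketch, assuming sys-boot/grub was built with GRUB_PLATFORMS="efi-64" and 
the ESP is mounted at /efi as above):

grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id=gentoo
grub-mkconfig -o /boot/grub/grub.cfg

GRUB's .efi image ends up under /efi/EFI/gentoo/ while the kernels and 
grub.cfg stay on the more robust /boot partition.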


[snip...]
> > ~ # du -s -h /var
> > 17G /var
> 
> Well, it is large but it should last me a long time.  Who knows what
> portage will do next.  When the distfiles and such moved to /var, it's a
> good thing I was on LVM.  This time, I'm not using LVM so gotta plan
> further ahead. 

You can use btrfs or zfs and have /root, /home, /var, /what-ever mounted in 
subvolumes.  This way they will use/share the free space of the single top 
level partition/disk.
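
Roughly what that looks like with btrfs, as a sketch (the subvolume names 
are just examples, and /dev/nvme0n1p3 stands in for whatever the big 
top-level partition ends up being):

mkfs.btrfs /dev/nvme0n1p3
mount /dev/nvme0n1p3 /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
btrfs subvolume create /mnt/@var
umount /mnt

# then mount each subvolume where it belongs, e.g. in fstab:
# /dev/nvme0n1p3  /var  btrfs  subvol=@var,noatime  0 0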





Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-15 Thread Dale
Peter Humphrey wrote:
> On Saturday, 15 June 2024 07:53:06 BST Dale wrote:
>> Peter Humphrey wrote:
>>> Here's the output of parted -l on my main NVMe disk in case it helps:
>>>
>>> Model: Samsung SSD 970 EVO Plus 250GB (nvme)
>>> Disk /dev/nvme1n1: 250GB
>>> Sector size (logical/physical): 512B/512B
>>> Partition Table: gpt
>>> Disk Flags:
>>>
>>> Number  Start   End SizeFile system Name  Flags
>>>
>>>  1  1049kB  135MB   134MB
>>>  2  135MB   4296MB  4161MB  fat32   boot  boot, esp
>>>  3  4296MB  12.9GB  8590MB  linux-swap(v1)  swap1 swap
>>>  4  12.9GB  34.4GB  21.5GB  ext4rescue
>>>  5  34.4GB  60.1GB  25.8GB  ext4root
>>>  6  60.1GB  112GB   51.5GB  ext4var
>>>  7  112GB   114GB   2147MB  ext4local
>>>  8  114GB   140GB   25.8GB  ext4home
>>>  9  140GB   183GB   42.9GB  ext4common
>>>
>> I'm starting the process here.  I'm trying to follow the install guide
>> but this is still not clear to me and the guide is not helping.  In your
>> list above, is #2 where /boot is mounted?  Is that where I put kernels,
>> init thingys, memtest and other images to boot from? 
> Yes, and yes ('tree -L 3 /boot' below). I've had no success with the layout 
> recommended in the wiki, because I want a choice of kernels to boot; I've 
> shown my boot-time screen here before. In fact, gparted shows the unformatted 
> first partition as bios_grub. I don't know why parted didn't show the same 
> (it 
> does show it now). Gparted screen shot attached.
>
> Thus, I have an unused bios_grub partition, then a FAT32 EFI system 
> partition, 
> then the rest as usual.


Could you share the boot screen again?  I used lilo ages ago then
switched to Grub.  Grub is massive but it works well enough.  Thing is,
seeing your screen may help me to understand how certain options work. 
It may make me use the bootloader you use, which is what, by the way? 

>> My current layout for a 1TB m.2 stick, typing by hand:
>>
>> 1    8GB      EFI System
>> 2    400GB    Linux file system for root or /.
>> 3    180GB    Linux file system for /var.
>>
>> I'll have /home and such on other drives, spinning rust. I'm just
>> wanting to be sure if my #1 and your #2 is where boot files go, Grub,
>> kernels, init thingys etc.  I've always had kernels and such on ext2 but
>> understand efi requires fat32. 
> Yes. I believe the EFI spec requires a file system that any OS can access, 
> and 
> FAT is it, FAT32 usually being recommended.
>
> Then, when it comes to bootctl and installkernel, I ignore the Gentoo advice 
> on USE flags because it results in illegible file names and impenetrable 
> directories. My version is far simpler to manage, which I do by hand. I don't 
> suppose anyone else would use my approach, but I started it long before the 
> days of EFI, and it still works for me.
>
> Also, as I've said here before, I dislike the all-things-to-all-men grub, so 
> I 
> don't use it.
>
> Incidentally, do you really need so much space in root and /var? Mine are 
> just 
> 40GB each, and not even half full. I don't run a lot of media apps though. 
> Still, space is cheap.   :)
>
> $ tree -L 3 /boot
> /boot
> ├── config-6.1.67-gentoo-rescue
> ├── config-6.6.21-gentoo
> ├── config-6.6.21-gentoo-rescue
> ├── config-6.6.30-gentoo
> ├── config-6.6.30-gentoo-rescue
> ├── config-6.7.9-gentoo
> ├── config-6.8.5-gentoo-r1
> ├── early_ucode.cpio
> ├── EFI
> │   ├── BOOT
> │   │   └── BOOTX64.EFI
> │   ├── Linux
> │   └── systemd
> │   └── systemd-bootx64.efi
> ├── intel-uc.img
> ├── loader
> │   ├── entries
> │   │   ├── 06-gentoo-rescue-6.6.30.conf
> │   │   ├── 07-gentoo-rescue-6.6.30.nonet.conf
> │   │   ├── 08-gentoo-rescue-6.6.21.conf
> │   │   ├── 09-gentoo-rescue-6.6.21.nonet.conf
> │   │   ├── 30-gentoo-6.6.30.conf
> │   │   ├── 32-gentoo-6.6.30.conf
> │   │   ├── 34-gentoo-6.6.30.conf
> │   │   ├── 40-gentoo-6.6.21.conf
> │   │   ├── 42-gentoo-6.6.21.conf
> │   │   └── 44-gentoo-6.6.21.conf
> │   ├── entries.srel
> │   ├── loader.conf
> │   └── random-seed
> ├── System.map-6.6.21-gentoo
> ├── System.map-6.6.21-gentoo-rescue
> ├── System.map-6.6.30-gentoo
> ├── System.map-6.6.30-gentoo-rescue
> ├── System.map-6.7.9-gentoo
> ├── System.map-6.8.5-gentoo-r1
> ├── vmlinuz-6.1.67-gentoo-rescue
> ├── vmlinuz-6.6.21-gentoo
> ├── vmlinuz-6.6.21-gentoo-rescue
> ├── vmlinuz-6.6.30-gentoo
> ├── vmlinuz-6.6.30-gentoo-rescue
> ├── vmlinuz-6.7.9-gentoo
> └── vmlinuz-6.8.5-gentoo-r1
>


OK.  So, I mount /boot and put my kernels, config files and all the
things I usually put there as usual.  Then I have a efi directory under
/boot that the efi tools use.  I only created one partition so I assume
when I boot it will find the efi directory within /boot???  If this is
the case, I don't need to redo my partitions.  I already started an OS
backup but oh well. ;-) 

Like you, I keep old kernels around too.  Eventually, I clean out old

Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-15 Thread Peter Humphrey
On Saturday, 15 June 2024 07:53:06 BST Dale wrote:
> Peter Humphrey wrote:
> > Here's the output of parted -l on my main NVMe disk in case it helps:
> > 
> > Model: Samsung SSD 970 EVO Plus 250GB (nvme)
> > Disk /dev/nvme1n1: 250GB
> > Sector size (logical/physical): 512B/512B
> > Partition Table: gpt
> > Disk Flags:
> > 
> > Number  Start   End SizeFile system Name  Flags
> > 
> >  1  1049kB  135MB   134MB
> >  2  135MB   4296MB  4161MB  fat32   boot  boot, esp
> >  3  4296MB  12.9GB  8590MB  linux-swap(v1)  swap1 swap
> >  4  12.9GB  34.4GB  21.5GB  ext4rescue
> >  5  34.4GB  60.1GB  25.8GB  ext4root
> >  6  60.1GB  112GB   51.5GB  ext4var
> >  7  112GB   114GB   2147MB  ext4local
> >  8  114GB   140GB   25.8GB  ext4home
> >  9  140GB   183GB   42.9GB  ext4common
> > 
> I'm starting the process here.  I'm trying to follow the install guide
> but this is still not clear to me and the guide is not helping.  In your
> list above, is #2 where /boot is mounted?  Is that where I put kernels,
> init thingys, memtest and other images to boot from? 

Yes, and yes ('tree -L 3 /boot' below). I've had no success with the layout 
recommended in the wiki, because I want a choice of kernels to boot; I've 
shown my boot-time screen here before. In fact, gparted shows the unformatted 
first partition as bios_grub. I don't know why parted didn't show the same (it 
does show it now). Gparted screen shot attached.

Thus, I have an unused bios_grub partition, then a FAT32 EFI system partition, 
then the rest as usual.

> My current layout for a 1TB m.2 stick, typing by hand:
> 
> 1    8GB      EFI System
> 2    400GB    Linux file system for root or /.
> 3    180GB    Linux file system for /var.
> 
> I'll have /home and such on other drives, spinning rust. I'm just
> wanting to be sure if my #1 and your #2 is where boot files go, Grub,
> kernels, init thingys etc.  I've always had kernels and such on ext2 but
> understand efi requires fat32. 

Yes. I believe the EFI spec requires a file system that any OS can access, and 
FAT is it, FAT32 usually being recommended.

Then, when it comes to bootctl and installkernel, I ignore the Gentoo advice 
on USE flags because it results in illegible file names and impenetrable 
directories. My version is far simpler to manage, which I do by hand. I don't 
suppose anyone else would use my approach, but I started it long before the 
days of EFI, and it still works for me.

Also, as I've said here before, I dislike the all-things-to-all-men grub, so I 
don't use it.

Incidentally, do you really need so much space in root and /var? Mine are just 
40GB each, and not even half full. I don't run a lot of media apps though. 
Still, space is cheap.   :)

$ tree -L 3 /boot
/boot
├── config-6.1.67-gentoo-rescue
├── config-6.6.21-gentoo
├── config-6.6.21-gentoo-rescue
├── config-6.6.30-gentoo
├── config-6.6.30-gentoo-rescue
├── config-6.7.9-gentoo
├── config-6.8.5-gentoo-r1
├── early_ucode.cpio
├── EFI
│   ├── BOOT
│   │   └── BOOTX64.EFI
│   ├── Linux
│   └── systemd
│   └── systemd-bootx64.efi
├── intel-uc.img
├── loader
│   ├── entries
│   │   ├── 06-gentoo-rescue-6.6.30.conf
│   │   ├── 07-gentoo-rescue-6.6.30.nonet.conf
│   │   ├── 08-gentoo-rescue-6.6.21.conf
│   │   ├── 09-gentoo-rescue-6.6.21.nonet.conf
│   │   ├── 30-gentoo-6.6.30.conf
│   │   ├── 32-gentoo-6.6.30.conf
│   │   ├── 34-gentoo-6.6.30.conf
│   │   ├── 40-gentoo-6.6.21.conf
│   │   ├── 42-gentoo-6.6.21.conf
│   │   └── 44-gentoo-6.6.21.conf
│   ├── entries.srel
│   ├── loader.conf
│   └── random-seed
├── System.map-6.6.21-gentoo
├── System.map-6.6.21-gentoo-rescue
├── System.map-6.6.30-gentoo
├── System.map-6.6.30-gentoo-rescue
├── System.map-6.7.9-gentoo
├── System.map-6.8.5-gentoo-r1
├── vmlinuz-6.1.67-gentoo-rescue
├── vmlinuz-6.6.21-gentoo
├── vmlinuz-6.6.21-gentoo-rescue
├── vmlinuz-6.6.30-gentoo
├── vmlinuz-6.6.30-gentoo-rescue
├── vmlinuz-6.7.9-gentoo
└── vmlinuz-6.8.5-gentoo-r1
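Each of those loader entries is only a handful of lines.  A sketch of what 
a file like 30-gentoo-6.6.30.conf would typically contain (the PARTUUID and 
options below are placeholders, not a copy of the real file):

title    Gentoo 6.6.30
linux    /vmlinuz-6.6.30-gentoo
initrd   /intel-uc.img
options  root=PARTUUID=--XXX-- ro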

-- 
Regards,
Peter.


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-15 Thread Dale
Michael wrote:
> On Saturday, 15 June 2024 07:53:06 BST Dale wrote:
>> Peter Humphrey wrote:
>>> On Sunday, 2 June 2024 16:11:38 BST Dale wrote:
 My plan, given it is a 1TB, use maybe 300GBs of it.  Leave the rest
 blank.  Have the /boot, EFI directory, root and maybe put /var on a
 separate partition.  I figure for the boot stuff, 3GBs would be plenty
 for all combined.  Make them large so they can grow.  Make root, which
 would include /usr, say 150GBs.  /var can be around 10GBs.  My current
 OS is on a 160GB drive.  I wish I could get the nerve up to use LVM on
 everything except the boot stuff, /boot and the EFI stuff.  If I make
 them like above, I should be good for a long time.  Could go much larger
 tho.  Could use maybe 700GBs of it.  I assume it would use the unused
 part if needed.  I still don't know a lot about those things.  Mostly
 what I see posted on this list really.
>>> Doesn't everyone mount /tmp and /var/tmp/portage on tmpfs these days? I
>>> use
>>> hard disk for a few large packages, but I'm not convinced it's needed -
>>> except when running an emerge -e, that is, when they can get in the way
>>> of lots of others. That's why, some months ago, I suggested introducing
>>> an ability to mark some packages for compilation solitarily. (Is that a
>>> word?)
>>>
>>> Here's the output of parted -l on my main NVMe disk in case it helps:
>>>
>>> Model: Samsung SSD 970 EVO Plus 250GB (nvme)
>>> Disk /dev/nvme1n1: 250GB
>>> Sector size (logical/physical): 512B/512B
>>> Partition Table: gpt
>>> Disk Flags:
>>>
>>> Number  Start   End SizeFile system Name  Flags
>>>
>>>  1  1049kB  135MB   134MB
>>>  2  135MB   4296MB  4161MB  fat32   boot  boot, esp
>>>  3  4296MB  12.9GB  8590MB  linux-swap(v1)  swap1 swap
>>>  4  12.9GB  34.4GB  21.5GB  ext4rescue
>>>  5  34.4GB  60.1GB  25.8GB  ext4root
>>>  6  60.1GB  112GB   51.5GB  ext4var
>>>  7  112GB   114GB   2147MB  ext4local
>>>  8  114GB   140GB   25.8GB  ext4home
>>>  9  140GB   183GB   42.9GB  ext4common
>>>
>>> The common partition is mounted under my home directory, to keep
>>> everything
>>> I'd want to preserve if I made myself a new user account. It's v. useful,
>>> too.
>> I'm starting the process here.  I'm trying to follow the install guide
>> but this is still not clear to me and the guide is not helping.  In your
>> list above, is #2 where /boot is mounted?  Is that where I put kernels,
>> init thingys, memtest and other images to boot from? 
> I'm simplifying this to keep it short, but you can understand the UEFI MoBo 
> firmware to be no different than the legacy CMOS code on your old MoBo, 
> except 
> upgraded, much larger and more powerful in its capabilities.  In particular, 
> it can access, load and execute directly specially structured executables, 
> stored as *.efi files on a GPT formatted disk, in the EFI System Partition 
> (ESP).  The ESP should be formatted as FAT32 and contain a directory named 
> EFI 
> in its top level, where any .EFI executables should be stored.  The ESP does 
> not have to be the first partition on the disk, the UEFI firmware will scan 
> and find .efi files in whichever FAT32 partition they are stored.
>
> NOTES: 
>
> 1. Some MoBo's UEFI firmware offer a 'Compatibility Support Module' (CSM) 
> setting, which if enabled will allow disks with a DOS partition table and a 
> boot loader in the MBR to be booted by the UEFI MoBo.  Since you are starting 
> from scratch and you're not installing Windows 98 there is no reason to have 
> this feature enabled.  I suggest you follow the handbook and use a GPT 
> partitioned disk with an ESP installed boot loader.

Left CSM cut off, didn't know what it was anyway.  LOL  Using GPT,
prefer to use GPT on everything just to be consistent.

> 2. The UEFI firmware is capable of loading Linux kernels directly without a 
> 3rd party boot loader, as long as the kernels have been created to include 
> the 
> 'EFI stub'.  This makes the kernel image executable by the UEFI firmware, 
> without the intervention of a boot loader/manager like GRUB:
>
> https://wiki.gentoo.org/wiki/EFI_stub
>
> Multibooting of different OSs or kernels can be managed thereafter by using 
> the CLI tool efibootmgr, or the UEFI boot menu (by pressing F2, DEL, or some 
> such key during POST).
>
> https://wiki.gentoo.org/wiki/Efibootmgr
>
> Given the above you broadly have the following choices:
>
> a) Simplest, direct, without a 3rd party bootloader:
>
> Mount your ESP under the /boot mountpoint and drop your kernel .efi images in 
> there, under the /boot/EFI/ directory, then boot them directly using the UEFI 
> firmware or efibootmgr to switch between them.
>
> b) Using a bootloader:
>
> Mount your ESP under the /efi mountpoint.  GRUB et al, will install their 
> .efi 
> image in the /efi/EFI/ directory.  You 

Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-15 Thread Michael
On Saturday, 15 June 2024 07:53:06 BST Dale wrote:
> Peter Humphrey wrote:
> > On Sunday, 2 June 2024 16:11:38 BST Dale wrote:
> >> My plan, given it is a 1TB, use maybe 300GBs of it.  Leave the rest
> >> blank.  Have the /boot, EFI directory, root and maybe put /var on a
> >> separate partition.  I figure for the boot stuff, 3GBs would be plenty
> >> for all combined.  Make them large so they can grow.  Make root, which
> >> would include /usr, say 150GBs.  /var can be around 10GBs.  My current
> >> OS is on a 160GB drive.  I wish I could get the nerve up to use LVM on
> >> everything except the boot stuff, /boot and the EFI stuff.  If I make
> >> them like above, I should be good for a long time.  Could go much larger
> >> tho.  Could use maybe 700GBs of it.  I assume it would use the unused
> >> part if needed.  I still don't know a lot about those things.  Mostly
> >> what I see posted on this list really.
> > 
> > Doesn't everyone mount /tmp and /var/tmp/portage on tmpfs these days? I
> > use
> > hard disk for a few large packages, but I'm not convinced it's needed -
> > except when running an emerge -e, that is, when they can get in the way
> > of lots of others. That's why, some months ago, I suggested introducing
> > an ability to mark some packages for compilation solitarily. (Is that a
> > word?)
> > 
> > Here's the output of parted -l on my main NVMe disk in case it helps:
> > 
> > Model: Samsung SSD 970 EVO Plus 250GB (nvme)
> > Disk /dev/nvme1n1: 250GB
> > Sector size (logical/physical): 512B/512B
> > Partition Table: gpt
> > Disk Flags:
> > 
> > Number  Start   End SizeFile system Name  Flags
> > 
> >  1  1049kB  135MB   134MB
> >  2  135MB   4296MB  4161MB  fat32   boot  boot, esp
> >  3  4296MB  12.9GB  8590MB  linux-swap(v1)  swap1 swap
> >  4  12.9GB  34.4GB  21.5GB  ext4rescue
> >  5  34.4GB  60.1GB  25.8GB  ext4root
> >  6  60.1GB  112GB   51.5GB  ext4var
> >  7  112GB   114GB   2147MB  ext4local
> >  8  114GB   140GB   25.8GB  ext4home
> >  9  140GB   183GB   42.9GB  ext4common
> > 
> > The common partition is mounted under my home directory, to keep
> > everything
> > I'd want to preserve if I made myself a new user account. It's v. useful,
> > too.
> I'm starting the process here.  I'm trying to follow the install guide
> but this is still not clear to me and the guide is not helping.  In your
> list above, is #2 where /boot is mounted?  Is that where I put kernels,
> init thingys, memtest and other images to boot from? 

I'm simplifying this to keep it short, but you can understand the UEFI MoBo 
firmware to be no different than the legacy CMOS code on your old MoBo, except 
upgraded, much larger and more powerful in its capabilities.  In particular, 
it can access, load and execute directly specially structured executables, 
stored as *.efi files on a GPT formatted disk, in the EFI System Partition 
(ESP).  The ESP should be formatted as FAT32 and contain a directory named EFI 
in its top level, where any .EFI executables should be stored.  The ESP does 
not have to be the first partition on the disk, the UEFI firmware will scan 
and find .efi files in whichever FAT32 partition they are stored.

NOTES: 

1. Some MoBo's UEFI firmware offer a 'Compatibility Support Module' (CSM) 
setting, which if enabled will allow disks with a DOS partition table and a 
boot loader in the MBR to be booted by the UEFI MoBo.  Since you are starting 
from scratch and you're not installing Windows 98 there is no reason to have 
this feature enabled.  I suggest you follow the handbook and use a GPT 
partitioned disk with an ESP installed boot loader.
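Setting that up takes only a few commands, e.g. (a sketch; the device name 
and ESP size are just examples):
=
parted /dev/nvme0n1 mklabel gpt
parted /dev/nvme0n1 mkpart ESP fat32 1MiB 1GiB
parted /dev/nvme0n1 set 1 esp on
mkfs.vfat -F 32 /dev/nvme0n1p1
=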

2. The UEFI firmware is capable of loading Linux kernels directly without a 
3rd party boot loader, as long as the kernels have been created to include the 
'EFI stub'.  This makes the kernel image executable by the UEFI firmware, 
without the intervention of a boot loader/manager like GRUB:

https://wiki.gentoo.org/wiki/EFI_stub
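The kernel options involved are roughly these (a sketch for an amd64 
kernel; the command line value is only an example):
=
CONFIG_EFI=y
CONFIG_EFI_STUB=y
# optional, bakes the command line into the image:
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="root=PARTUUID=--XXX-- ro"
=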

Multibooting of different OSs or kernels can be managed thereafter by using 
the CLI tool efibootmgr, or the UEFI boot menu (by pressing F2, DEL, or some 
such key during POST).

https://wiki.gentoo.org/wiki/Efibootmgr

Given the above you broadly have the following choices:

a) Simplest, direct, without a 3rd party bootloader:

Mount your ESP under the /boot mountpoint and drop your kernel .efi images in 
there, under the /boot/EFI/ directory, then boot them directly using the UEFI 
firmware or efibootmgr to switch between them.
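In practice that amounts to something like (a sketch; paths, names and the 
disk are placeholders):
=
mount /dev/nvme0n1p1 /boot
mkdir -p /boot/EFI/Gentoo
cp /usr/src/linux/arch/x86/boot/bzImage /boot/EFI/Gentoo/vmlinuz-6.6.30-gentoo.efi
efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Gentoo 6.6.30" -l '\EFI\Gentoo\vmlinuz-6.6.30-gentoo.efi'
=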

b) Using a bootloader:

Mount your ESP under the /efi mountpoint.  GRUB et al, will install their .efi 
image in the /efi/EFI/ directory.  You can have your /boot as a directory on 
your / partition, or on its own separate partition with a more robust fs type 
than ESP's FAT and your kernel images will be installed in there.

c) For systemd you can mount your ESP 

Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-15 Thread Dale
Peter Humphrey wrote:
> On Sunday, 2 June 2024 16:11:38 BST Dale wrote:
>
>> My plan, given it is a 1TB, use maybe 300GBs of it.  Leave the rest
>> blank.  Have the /boot, EFI directory, root and maybe put /var on a
>> separate partition.  I figure for the boot stuff, 3GBs would be plenty
>> for all combined.  Make them large so they can grow.  Make root, which
>> would include /usr, say 150GBs.  /var can be around 10GBs.  My current
>> OS is on a 160GB drive.  I wish I could get the nerve up to use LVM on
>> everything except the boot stuff, /boot and the EFI stuff.  If I make
>> them like above, I should be good for a long time.  Could go much larger
>> tho.  Could use maybe 700GBs of it.  I assume it would use the unused
>> part if needed.  I still don't know a lot about those things.  Mostly
>> what I see posted on this list really. 
> Doesn't everyone mount /tmp and /var/tmp/portage on tmpfs these days? I use 
> hard disk for a few large packages, but I'm not convinced it's needed - 
> except 
> when running an emerge -e, that is, when they can get in the way of lots of 
> others. That's why, some months ago, I suggested introducing an ability to 
> mark some packages for compilation solitarily. (Is that a word?)
>
> Here's the output of parted -l on my main NVMe disk in case it helps:
>
> Model: Samsung SSD 970 EVO Plus 250GB (nvme)
> Disk /dev/nvme1n1: 250GB
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
> Disk Flags: 
>
> Number  Start   End SizeFile system Name  Flags
>  1  1049kB  135MB   134MB
>  2  135MB   4296MB  4161MB  fat32   boot  boot, esp
>  3  4296MB  12.9GB  8590MB  linux-swap(v1)  swap1 swap
>  4  12.9GB  34.4GB  21.5GB  ext4rescue
>  5  34.4GB  60.1GB  25.8GB  ext4root
>  6  60.1GB  112GB   51.5GB  ext4var
>  7  112GB   114GB   2147MB  ext4local
>  8  114GB   140GB   25.8GB  ext4home
>  9  140GB   183GB   42.9GB  ext4common
>
> The common partition is mounted under my home directory, to keep everything 
> I'd want to preserve if I made myself a new user account. It's v. useful, too.


I'm starting the process here.  I'm trying to follow the install guide
but this is still not clear to me and the guide is not helping.  In your
list above, is #2 where /boot is mounted?  Is that where I put kernels,
init thingys, memtest and other images to boot from? 

My current layout for a 1TB m.2 stick, typing by hand:

1    8GB        EFI System   
2    400GB    Linux file system for root or /.
3    180GB    Linux file system for /var.

I'll have /home and such on other drives, spinning rust. I'm just
wanting to be sure if my #1 and your #2 is where boot files go, Grub,
kernels, init thingys etc.  I've always had kernels and such on ext2 but
understand efi requires fat32. 

Thanks.

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-14 Thread Dale
Dale wrote:
> Howdy, again,
>
> <<< SNIP >>>
>
> Dale
>
> :-)  :-) 
>

Update, number 1.  The CPU finally came in.  It was supposed to be here
Monday, finally left the hub on Wednesday morning, went to the wrong
post office.  This morning, it finally made it to the right post office
and arrived in my mailbox this afternoon.  I might add, some other
things came in too.  I got the thermal pads and a couple more of my
favorite vintage soup spoons.  O_O 

Anyway, installed a couple more 140mm case fans, also added a .5mm thick
thermal pad to the m.2 heatsink so that it makes better contact.  Then I
figured out how to install the CPU cooler base and installed the CPU and
cooler.  I then rechecked all the power and fan connectors.  I dug out
the always handy Ventoy USB stick, thanks again for pointing that thing
out to me ages ago, and I proceeded to my first boot up.  I hit the
power button and watched for smoke at first, with a finger on the power
supply power button, and then watched the fans spin up, 2 CPU fans and 6
case fans.  All good.  Then came the very tiny beep.  A couple seconds
later, the screen popped on and said I had a new CPU installed.  Well
duh!!!  To be honest, I reset to defaults, restarted, then updated the
BIOS.  So far, very good.  No smoke or fire and everything is working. 

Once all that was done, I booted the Ventoy stick and worked my way to a
memtest tool that works with EFI thingy.  It is currently at 16% of
64GBs and testing pretty fast.  Keep in mind, first new rig I built in a
decade, first ASUS mobo, first time using efi thingy and likely other
firsts I don't know about.  This reminds me of the sub-thread
discussion.  Every transistor on the CPU has to work, every chip on mobo
has to work, everything connected to it has to work.  And it all worked,
first time.  The old saying about a weak link in a chain comes to mind. 
This chain has a LOT of links.  They all seem to be strong.

The memtest is at 42% now.  Zero errors. 

Oh, I ordered a 4 port video card.  I found a comparison site that
compares video card abilities and speed.  It is about 150% faster than the
current card in my main rig.  It's a Quadro P1000.  It's not a lot but
it is more than enough for me.  It's not here yet so I'm using one of a
four pack I bought when I started the NAS box project.  It has four
ports but is PCIe V2.  New card is PCIe V3.  Any newer, gets expensive
and a whole lot more than I need.  The ports are the big thing. 

Will update as things progress.  I suspect the efi thingy is going to
get interesting.  o_O  A link to pictures will come later.  Oh, the
memory sticks put on a light show.  What is up with that???  The little
m.2 cooler looks really cute.  Pretty much adorable.  LOL  I found one
that looks almost the same but taller and has two heat pipes instead of
one like mine.  Kinda pricey tho.  Must be for really large capacity m.2
sticks.  Some are quite large, and expensive too. 

Later.

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-10 Thread Dale
Frank Steinmetzger wrote:
> Am Sun, Jun 09, 2024 at 05:13:46PM -0500 schrieb Dale:
>
>> Frank Steinmetzger wrote:
>>> Am Samstag, 8. Juni 2024, 19:46:05 MESZ schrieb Dale:
>>>
>> Sadly, the CPU I got is for processing only, no video support it says.
> So you got an F model?
 I got the X model.  It's supposed to be a wttle bit faster.  o_O
>>> Well as we have been mentioning several times by now: starting with AM5 
>>> CPUs, 
>>> the X models all have integrated graphics. Where does what say “no video 
>>> support” and in which context?
>>>
>>
>> I found it on the AMD website.  The CPU I got is a AM4.  Linky.
>>
>> https://www.amd.com/en/products/processors/desktops/ryzen/5000-series/amd-ryzen-7-5800x.html
>>
>> From that:
>>
>> Graphics Model        Discrete Graphics Card Required
>>
>> I think the G model has graphics, but gives up a little speed.  Or as I
>> put it above, a wiiile bit.  LOL 
> Okay, are you sure? The 5800 is an AM4 CPU. Up until now in this thread, 
> you were talking of AM5 (i.e. 7000-series CPU with a 600-series Mobo 
> chipset and DDR5). The 5800X does not fit into this at all. Just sayin’.
>


The build with the AM5 mobo took a backseat.  The lack of PCIe slots
just kept buggin me until I decided I didn't want to go that route. 
Basically, it didn't suit my needs without jumping through some hoops
that I don't like.  Plus, with my health, I can't jump through hoops. 
:/  So, I went with an AM4 setup that has more PCIe slots and suits my
needs better, except being a little slower CPU wise.  It will also make
a better NAS box later on and will likely be good enough until it blows
smoke.  It doesn't take a whole lot to move files from drives to network.

So, AM5 out, AM4 in.  Parts on the way, some already here.  As expected,
the State USPS hub still has a LOT of the stuff that was supposed to
come in today.  If they don't move tomorrow, I'm going to squeak loudly,
so that I get the grease.  The squeaky wheel thing.  UPS didn't let me
down tho.  I got the mobo and memory today.  I installed the memory and
m.2 stick thingy on the mobo.  No heatsink for the m.2 yet so when that
comes in, off it comes to install that.  I still can't get over the size
of the m.2 stick.  It looks like a long postage stamp. 

Oh, it has some nice PCIe slots.  They mostly X1 with all the
bifurcation thingy but still, PCIe slots. 

This thread went sideways when I changed to AM4. 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-10 Thread Frank Steinmetzger
Am Sun, Jun 09, 2024 at 05:13:46PM -0500 schrieb Dale:

> Frank Steinmetzger wrote:
> > Am Samstag, 8. Juni 2024, 19:46:05 MESZ schrieb Dale:
> >
>  Sadly, the CPU I got is for processing only, no video support it says.
> >>> So you got an F model?
> >> I got the X model.  It's supposed to be a wttle bit faster.  o_O
> > Well as we have been mentioning several times by now: starting with AM5 
> > CPUs, 
> > the X models all have integrated graphics. Where does what say “no video 
> > support” and in which context?
> >
> 
> 
> I found it on the AMD website.  The CPU I got is a AM4.  Linky.
> 
> https://www.amd.com/en/products/processors/desktops/ryzen/5000-series/amd-ryzen-7-5800x.html
> 
> From that:
> 
> Graphics Model        Discrete Graphics Card Required
> 
> I think the G model has graphics, but gives up a little speed.  Or as I
> put it above, a wiiile bit.  LOL 

Okay, are you sure? The 5800 is an AM4 CPU. Up until now in this thread, 
you were talking of AM5 (i.e. 7000-series CPU with a 600-series Mobo 
chipset and DDR5). The 5800X does not fit into this at all. Just sayin’.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Sugar is what gives coffee a sour taste, if you forget to put it in.




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-10 Thread William Kenworthy


On 10/6/24 18:03, Dale wrote:

...

Interesting.  I thought the four port card in the NAS box was newer, at
least a little bit anyway.  I may dig around for a card with display
port outputs and see what I can find.  Hopefully something not too old.
I don't need much.  Biggest thing, drivers that work.  ;-)

Dale

:-)  :-)




Be sure to read up on displayports first - multiple monitors are daisy 
chained. Some monitors have two displayports to allow for daisychaining:
PC->DP1 on monitor1, Monitor1 DP2-> Monitor2 DP1. - Not sure what you 
have but my monitors only have a single DP which is often the case for 
early generations (if they have them at all) If you don't have such a 
monitor, you can buy a powered splitter with 2 or 3 ports. This what I 
use: DP to splitter to two monitors with the on board HDMI port to the 
third.  X works ok but I do have a few glitches at times with one 
monitor going black until I restart X and an occasional video pipeline 
error requiring a restart of the system to fix.  This is with an Intel 
on-board video system.


And not to forget, monitors are power hungry - my system is currently 
using ~130w of power to type this email - the PC (low power celeron) 
accounts for less than 15w of that :)


BillK



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-10 Thread Dale
Michael wrote:
> On Monday, 10 June 2024 05:26:49 BST Dale wrote:
>
>> My main rig that I'm currently typing on has this:
>>
>> 01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX
>> 650]
>>
>> It has so far served me well for my monitor and TV.  I think the NVS
>> above is more powerful than the GTX.  It's hard for me to know which is
>> better since it is like comparing apples to oranges when looking at
>> model numbers.  I'm pretty sure the NVS is a good bit newer which
>> usually means faster. 
> Your GTX 650 has twice as many CUDA cores, faster base clock, close to 3 
> times 
> higher memory bandwidth, GDDR5 instead of DDR3 and it supports PCIe 3.0 bus, 
> Vs PCIe 2.0 of your NVS.  The GeForce is a better card in terms of 
> performance.
>
> I haven't owned either, but the spec of your main card reads as a better 
> option.


Interesting.  I thought the four port card in the NAS box was newer, at
least a little bit anyway.  I may dig around for a card with display
port outputs and see what I can find.  Hopefully something not too old. 
I don't need much.  Biggest thing, drivers that work.  ;-)

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-10 Thread Michael
On Monday, 10 June 2024 05:26:49 BST Dale wrote:

> My main rig that I'm currently typing on has this:
> 
> 01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX
> 650]
> 
> It has so far served me well for my monitor and TV.  I think the NVS
> above is more powerful than the GTX.  It's hard for me to know which is
> better since it is like comparing apples to oranges when looking at
> model numbers.  I'm pretty sure the NVS is a good bit newer which
> usually means faster. 

Your GTX 650 has twice as many CUDA cores, faster base clock, close to 3 times 
higher memory bandwidth, GDDR5 instead of DDR3 and it supports PCIe 3.0 bus, 
Vs PCIe 2.0 of your NVS.  The GeForce is a better card in terms of 
performance.

I haven't owned either, but the spec of your main card reads as a better 
option.




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-09 Thread Dale
Rich Freeman wrote:
> On Sun, Jun 9, 2024 at 6:13 PM Dale  wrote:
>> I think the G model has graphics, but gives up a little speed.  Or as I
>> put it above, a wiiile bit.  LOL
>>
> I'll be honest - you should probably think about how important multi
> monitors are as a priority - is this a desktop or a server?
>
> Also, if you REALLY need the extra monitors, I'd consider sticking the
> video card in a smaller slot and using the faster one for IO/etc.  If
> you're just doing web browsing or whatever you can get by fine with a
> 1x slot for your GPU.  Just don't try to do 3D graphics that way
> (video/2D will be fine most likely - not GPU encoding though - not
> that your cheap GPU will get you much of that).
>
> You might get better results if you dedicate things more to storage vs
> compute vs desktop.  Sure, it is more hardware, but you probably won't
> have to spend as much on any of them individually.  For example, a box
> that is JUST doing storage won't really become obsolete anytime soon -
> you'll literally be able to run it until it breaks, or you need more
> drives.  You can also tailor the RAM to whatever it actually needs
> (usually not much unless you're running Ceph or really want good cache
> performance).The compute box will always be a slave to moore's
> law, but you won't care about how many PCIe slots it has, so you can
> just spend on the CPU+ram.  The desktop for non-gaming could be a
> really cheap small quiet box that actually doesn't take up much space
> on your desk - probably something off the shelf.  Those name-brand
> desktops can be really handy as they'll run off of USB-C, support
> things like USB-C video, and so on.
>


On my main rig, I need at least two monitors.  One for web browsing,
email and other stuff and one for watching TV.  I may add another
monitor to that, at least power it up when I need to manage a lot of
files.  It may not run full time, at first anyway.  It's one reason on
this different build, I plan to use a four port video card.  I'll still
have one port left over.  The video card, it isn't much by modern
standards.  I booted the NAS box up since it has one of these cards in
it but one just like it will likely go in the new build.  From lspci:

01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [NVS 510]

Of course someone pointed out that a display port thingy on the mobo can
be turned into two outputs, I think.  I didn't know that before.  Your
idea tho, having a PCIe v4 video card or plugging the card I have into a
X4 slot might work, if the card allows that.  As it is, the faster
drives I need will be connected to the mobo ports.  Drives that can be
slower, such as for torrent sharing, will be connected to the little
SATA cards.  My average upload is about 13TBs a month.  That's a lot of
sharing.  I'm not too concerned with the speed of the torrent sharing as
it works well enough now.  Back to video, current card is v2.  If I
found a card that was v4 or so then it would likely be as fast or faster
than my current v2 card even if in a X4 slot.  I don't play games. 
Likely the biggest load is my TV.  A lot of videos are 1080p, some are
720p.  Some are so old, they are less than that and not HD at all. 

My main rig that I'm currently typing on has this:

01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX
650]

It has so far served me well for my monitor and TV.  I think the NVS
above is more powerful than the GTX.  It's hard for me to know which is
better since it is like comparing apples to oranges when looking at
model numbers.  I'm pretty sure the NVS is a good bit newer which
usually means faster. 

At some point, the mobo that is on the way will be the drive box, NAS if
you will.  Then a newer build will be the new main rig.  I'm hopeful
that slots and such can be sorted out by then.  Maybe some new way to
connect a lot of drives will appear.  Hopefully not USB.  My current rig
will be a spare or something.  After all, it does work well enough. 
Could even be a NAS box that runs 24/7 and serves up my TV video. 
Maybe.  I dunno yet. 

I'm getting a faster CPU, more memory and still have a few PCIe slots. 
It's a start.  :/  I hope I kept the version and slot width straight up
there.  It's kinda hard to keep track.

Dale

:-)  :-) 

P. S.  Stuff still in State USPS hub.  Hasn't moved yet.  May only have
mobo and memory tomorrow and may have to be the squeaky wheel for rest
of stuff, to get it moving.  That means no CPU either.  Hey, I got the
little m.2 stick.  I also have a cute little cooler for the m.2 stick on
the way.  :-D 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-09 Thread Rich Freeman
On Sun, Jun 9, 2024 at 6:13 PM Dale  wrote:
>
> I think the G model has graphics, but gives up a little speed.  Or as I
> put it above, a wiiile bit.  LOL
>

I'll be honest - you should probably think about how important multi
monitors are as a priority - is this a desktop or a server?

Also, if you REALLY need the extra monitors, I'd consider sticking the
video card in a smaller slot and using the faster one for IO/etc.  If
you're just doing web browsing or whatever you can get by fine with a
1x slot for your GPU.  Just don't try to do 3D graphics that way
(video/2D will be fine most likely - not GPU encoding though - not
that your cheap GPU will get you much of that).

You might get better results if you dedicate things more to storage vs
compute vs desktop.  Sure, it is more hardware, but you probably won't
have to spend as much on any of them individually.  For example, a box
that is JUST doing storage won't really become obsolete anytime soon -
you'll literally be able to run it until it breaks, or you need more
drives.  You can also tailor the RAM to whatever it actually needs
(usually not much unless you're running Ceph or really want good cache
performance).The compute box will always be a slave to moore's
law, but you won't care about how many PCIe slots it has, so you can
just spend on the CPU+ram.  The desktop for non-gaming could be a
really cheap small quiet box that actually doesn't take up much space
on your desk - probably something off the shelf.  Those name-brand
desktops can be really handy as they'll run off of USB-C, support
things like USB-C video, and so on.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-09 Thread Dale
Michael wrote:
> On Sunday, 9 June 2024 23:13:46 BST Dale wrote:
>
>> P. S.  I ordered a two piece cooler for the m.2 stick.  It's way
>> overkill but it is so cute and costs about the same as much smaller
>> versions.  It has little heat pipes and fins.  O_O 
>>
>> https://www.ebay.com/itm/226160453032 
> This looks like an overkill for a PCIe 4.0 M.2, but if your graphics card 
> will 
> be chucking a lot of heat on it temperatures will stay on the cool side.  I'd 
> be interested to see what temperatures you get with this heatsink installed!


It is certainly overkill.  The closest one that I liked that was two
piece cost about $11 or $12 or so.  Then I ran up on that for $14.  I'm
not into flashy stuff or anything usually.  I'm more about function.  In
this case tho, I just thought it was so cute.  If that m.2 stick ever
blows up, I don't think heat will be the cause.  The m.2 cooler will
sorta match the CPU cooler tho.  I did order some thermal pads.  I have
tons for TO-220, TO-3p and plain TO-3 transistors.  The TO-3p would
work, except for the screw hole in some, but I wanted some that are
solid and thicker.  Based on size, I think they made for m.2. 

When this build is done, I'm going to try and take pics and post
somewhere.  I'll link them in a thread somewhere.  I'll also post some
tests, temps and such.  There's no replacement for real results as
compared to theoretical results.  I'm sure this machine will be faster
than the current rig.  Testing will show just how much.  I suspect my
CPU will run really cool.  I got a fairly large CPU cooler.  At least
the size of my current rig.  My current CPU has never been above 125F. 
The NAS box rarely gets above 102F.  It isn't in a case tho.  It sits on
top of a piece of plywood.  I did screw it down and rigged a way to lock
the hard drive down.  Just to make sure it doesn't run off.  ROFL 

Several packages still in State USPS hub.  They haven't moved, yet. 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-09 Thread Michael
On Sunday, 9 June 2024 23:13:46 BST Dale wrote:

> P. S.  I ordered a two piece cooler for the m.2 stick.  It's way
> overkill but it is so cute and costs about the same as much smaller
> versions.  It has little heat pipes and fins.  O_O 
> 
> https://www.ebay.com/itm/226160453032 

This looks like an overkill for a PCIe 4.0 M.2, but if your graphics card will 
be chucking a lot of heat on it temperatures will stay on the cool side.  I'd 
be interested to see what temperatures you get with this heatsink installed!




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-09 Thread Meowie Gamer
I see

Sent from Proton Mail Android


 Original Message 
On 6/9/24 17:41, Frank Steinmetzger  wrote:

>  Am Samstag, 8. Juni 2024, 15:40:48 MESZ schrieb Meowie Gamer:
>  
>  > vim has a WHAT?! You gotta tell me how to use that.
>  
>  Digraphs are graphs (i.e. characters) that are entered using two other
>  characters. Basically it’s the same principle as the X11 compose key, but
>  specific to vim. If you enter :dig[raph], you get a list of all defined such
>  digraphs. The output of the ga command (print ascii info) includes the 
> digraph
>  combo, if one exists for the highlighted character. The unicode plugin 
> behaves
>  similarly. You can also define your own.
>  
>  The feature is used in insert mode and triggered with Ctrl-K, after which you
>  press the two characters.
>  
>  For example, there are predefined digraphs for Cyrillic, Greek and Japanese
>  Hiragana and Katakana. And you can paint boxes easily thanks to the mnemonics
>  involved. A lowercase character denotes a thin line, an uppercase a thick
>  line. The characters themselves signify the “direction” of the line; u=up,
>  r=right, l=left, d=down, v=vertical, h=horizontal. So to paint a thin vertical
>  line with a thick line branching to the right, you press vR: ┝.
>  
>  See :help dig for more.
>  --
>  “Privacy laws are our biggest impediment to us obtaining our
>  objectives.” — Michael Eisner, CEO of Disney, 2001
>  
>  
>  
>



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-09 Thread Dale
Frank Steinmetzger wrote:
> Am Samstag, 8. Juni 2024, 19:46:05 MESZ schrieb Dale:
>
 Sadly, the CPU I got is for processing only, no video support it says.
>>> So you got an F model?
>> I got the X model.  It's supposed to be a wttle bit faster.  o_O
> Well as we have been mentioning several times by now: starting with AM5 CPUs, 
> the X models all have integrated graphics. Where does what say “no video 
> support” and in which context?
>


I found it on the AMD website.  The CPU I got is a AM4.  Linky.

https://www.amd.com/en/products/processors/desktops/ryzen/5000-series/amd-ryzen-7-5800x.html

From that:

Graphics Model        Discrete Graphics Card Required

I think the G model has graphics, but gives up a little speed.  Or as I
put it above, a wiiile bit.  LOL 

If some things don't move from the State USPS hub by around 2AM, they
stuck.  The mobo and memory are coming by UPS and already at the local
hub, waiting for the delivery driver in the morning.  The CPU and
several other things are in the USPS hub that has . . . issues. 
Sometimes they require a cattle prod to get them moving.  I wonder
sometimes if they hired zombies to work there.  :/ 

Dale

:-)  :-) 

P. S.  I ordered a two piece cooler for the m.2 stick.  It's way
overkill but it is so cute and costs about the same as much smaller
versions.  It has little heat pipes and fins.  O_O 

https://www.ebay.com/itm/226160453032 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-09 Thread Frank Steinmetzger
Am Samstag, 8. Juni 2024, 15:40:48 MESZ schrieb Meowie Gamer:

> vim has a WHAT?! You gotta tell me how to use that.

Digraphs are graphs (i.e. characters) that are entered using two other 
characters. Basically it’s the same principle as the X11 compose key, but 
specific to vim. If you enter :dig[raph], you get a list of all defined such 
digraphs. The output of the ga command (print ascii info) includes the digraph 
combo, if one exists for the highlighted character. The unicode plugin behaves 
similarly. You can also define your own.

The feature is used in insert mode and triggered with Ctrl-K, after which you 
press the two characters.

For example, there are predefined digraphs for Cyrillic, Greek and Japanese 
Hiragana and Katakana. And you can paint boxes easily thanks to the mnemonics 
involved. A lowercase character denotes a thin line, an uppercase a thick 
line. The characters themselves signify the “direction” of the line; u=up, 
r=right, l=left, d=down, v=vertical, h=horizontal. So to paint a thin vertical 
line with a thick line branching to the right, you press vR: ┝.

See :help dig for more.
-- 
“Privacy laws are our biggest impediment to us obtaining our
objectives.” — Michael Eisner, CEO of Disney, 2001





Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-09 Thread Frank Steinmetzger
Am Samstag, 8. Juni 2024, 19:46:05 MESZ schrieb Dale:

> >> Sadly, the CPU I got is for processing only, no video support it says.
> > 
> > So you got an F model?
> 
> I got the X model.  It's supposed to be a wttle bit faster.  o_O

Well as we have been mentioning several times by now: starting with AM5 CPUs, 
the X models all have integrated graphics. Where does what say “no video 
support” and in which context?

-- 
An empty head is easier to nod with.






Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-09 Thread Dale
Michael wrote:
> On Saturday, 8 June 2024 23:41:58 BST Dale wrote:
>> Michael wrote:
>>> On Saturday, 8 June 2024 18:46:05 BST Dale wrote:
 I got the little m.2 thing today.  It's a lot smaller than I expected.
 A whole lot smaller.  It's fairly tiny actually.  They look bigger in
 pictures or on video.  This reminds me of the discussion on the number
 of transistors on a chip.  I bet they packed tight in there.  I bought a
 heat sink that goes on each individual chip, one on controller, one on
 data chip, two if it has two data chips.  Anyway, it has only two chips
 so I got extra heat sinks.  LOL  They fairly large since the mobo has
 nothing on top of them.  I got plenty of room.  That said, anyone else
 notice they make heat sinks for those things that have heat pipes and
 itty bitty fans??  O_O  It does make them run cool tho.  :/  I like my
 little heat sinks better.  Pretty good size and no moving parts.  They
 come in a couple colors.  Linky.

 https://www.ebay.com/itm/254119864180

 Oh, for those reading this.  The data controller chip is a little
 thinner than the data chip.  I confirmed that on mine.  If you don't use
 a heatsink that has a thicker pad for that, it leaves a gap and the
 controller chip doesn't make contact which means it runs hotter.  As I
 mentioned earlier, the controller seems to produce more heat so it needs
 the heatsink more than the data chip.  On videos, some people use a
 additional pad to make up the difference on the controller chip.
>>> Additional pad, or alternatively two pads of different thicknesses, with
>>> the thicker pad fitted on the controller chip.
>>>
>>> https://www.youtube.com/watch?v=VIUU5ogVHg8
>> I think this is the one I watched.  I'm not 100% certain.  I watched
>> several different ones including reviews etc. 
>>
>> https://www.youtube.com/watch?v=I8Z09nU554Q
>>
 I
 noticed on a couple heatsinks, they mention the difference and show they
 use a pad that makes full contact on both chips.  It looks like the
 thermal pad is thicker and more squishy.  One I saw looks like it is
 just a little thicker on the controller end.
>>> Thermal pads are spongy and compressible.  When you screw down the metal
>>> heatsink on the NVMe stick you should find the thermal pad area over the
>>> NAND chips will just squish more than over the controller.  Not sure if
>>> it makes any difference in performance using a single thickness thermal
>>> pad, as long as the thickness of it is enough to make good contact with
>>> the controller chip, after it has squished over the NAND chips.  I would
>>> think for normal PC operation using a different heatsink to what the MoBo
>>> comes with would be an overkill and it may not make much of a difference
>>> anyway.  I fitted a single thickness thermal pad on my NVMe and it idles
>>> at ~46-47°C pushing up to ~54-57°C when being written to in daily
>>> operations.  I haven't run any benchmark load tests to see how hot it may
>>> get, but with the above temperature range I would think the thermal pad
>>> is working fine.  :-)
>> I think it depends on the thickness of the pad.  Some pads look really
>> thin on cheaper devices while some appear thicker on ones costing a
>> little bit more.  A thicker pad has more ability to fill in the
>> difference with likely not even a lot of pressure.  A thin pad or what
>> some refer to as tape which is not compressible much at all would likely
>> not work as well unless you either put a lot of pressure on it or added
>> a little pad to make up the difference.
> As I understand it the tape Vs thermal pad is not just a question of 
> thickness.  They are for different purposes and used in different places.
>
> The tape is for electrical insulation and perhaps (some) thermal 
> conductivity.  
> It is placed underneath the M.2 stick, when the stick is fitted inside a 
> heatsink case.
>
> The thermal pad is placed on top of the chips and below the heatsink's heat 
> dissipating plate.  The branded M.2 OEM sticker on the chips is meant to 
> offer 
> electrical insulation, while the thermal pad is there to conduct heat away 
> from the chips and transfer it to the heatsink.
>
> The stock MoBo M.2 heatsink is often screwed down on top of the M.2 stick, 
> but 
> some makes/models may provide a wrap around heatsink enclosure as many 
> aftermarket heatsinks do.  Beware PCIe 5.0 M.2s run hotter than PCie 4.0, or 
> 3.0, especially above 1TB and will require a heatsink or will overheat and 
> end 
> up throttling themselves.

Some tapes are used as an adhesive, double sided, to hold a heatsink in
place.  I've used several that are very thin.  Most are as thin as
commonly used Scotch tape which isn't used for thermal stuff naturally. 
Basically, they're thin, which is why heat does transfer, but I'm sure they
are made differently than, say, double sided Scotch tape.  Thing is, they
tend to be permanent.  If 

Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-09 Thread Michael
On Saturday, 8 June 2024 23:41:58 BST Dale wrote:
> Michael wrote:
> > On Saturday, 8 June 2024 18:46:05 BST Dale wrote:
> >> I got the little m.2 thing today.  It's a lot smaller than I expected.
> >> A whole lot smaller.  It's fairly tiny actually.  They look bigger in
> >> pictures or on video.  This reminds me of the discussion on the number
> >> of transistors on a chip.  I bet they packed tight in there.  I bought a
> >> heat sink that goes on each individual chip, one on controller, one on
> >> data chip, two if it has two data chips.  Anyway, it has only two chips
> >> so I got extra heat sinks.  LOL  They fairly large since the mobo has
> >> nothing on top of them.  I got plenty of room.  That said, anyone else
> >> notice they make heat sinks for those things that have heat pipes and
> >> itty bitty fans??  O_O  It does make them run cool tho.  :/  I like my
> >> little heat sinks better.  Pretty good size and no moving parts.  They
> >> come in a couple colors.  Linky.
> >> 
> >> https://www.ebay.com/itm/254119864180
> >> 
> >> Oh, for those reading this.  The data controller chip is a little
> >> thinner than the data chip.  I confirmed that on mine.  If you don't use
> >> a heatsink that has a thicker pad for that, it leaves a gap and the
> >> controller chip doesn't make contact which means it runs hotter.  As I
> >> mentioned earlier, the controller seems to produce more heat so it needs
> >> the heatsink more than the data chip.  On videos, some people use a
> >> additional pad to make up the difference on the controller chip.
> > 
> > Additional pad, or alternatively two pads of different thicknesses, with
> > the thicker pad fitted on the controller chip.
> > 
> > https://www.youtube.com/watch?v=VIUU5ogVHg8
> 
> I think this is the one I watched.  I'm not 100% certain.  I watched
> several different ones including reviews etc. 
> 
> https://www.youtube.com/watch?v=I8Z09nU554Q
> 
> >> I
> >> noticed on a couple heatsinks, they mention the difference and show they
> >> use a pad that makes full contact on both chips.  It looks like the
> >> thermal pad is thicker and more squishy.  One I saw looks like it is
> >> just a little thicker on the controller end.
> > 
> > Thermal pads are spongy and compressible.  When you screw down the metal
> > heatsink on the NVMe stick you should find the thermal pad area over the
> > NAND chips will just squish more than over the controller.  Not sure if
> > it makes any difference in performance using a single thickness thermal
> > pad, as long as the thickness of it is enough to make good contact with
> > the controller chip, after it has squished over the NAND chips.  I would
> > think for normal PC operation using a different heatsink to what the MoBo
> > comes with would be an overkill and it may not make much of a difference
> > anyway.  I fitted a single thickness thermal pad on my NVMe and it idles
> > at ~46-47°C pushing up to ~54-57°C when being written to in daily
> > operations.  I haven't run any benchmark load tests to see how hot it may
> > get, but with the above temperature range I would think the thermal pad
> > is working fine.  :-)
> 
> I think it depends on the thickness of the pad.  Some pads look really
> thin on cheaper devices while some appear thicker on ones costing a
> little bit more.  A thicker pad has more ability to fill in the
> difference with likely not even a lot of pressure.  A thin pad or what
> some refer to as tape which is not compressible much at all would likely
> not work as well unless you either put a lot of pressure on it or added
> a little pad to make up the difference.

As I understand it the tape Vs thermal pad is not just a question of 
thickness.  They are for different purposes and used in different places.

The tape is for electrical insulation and perhaps (some) thermal conductivity.  
It is placed underneath the M.2 stick, when the stick is fitted inside a 
heatsink case.

The thermal pad is placed on top of the chips and below the heatsink's heat 
dissipating plate.  The branded M.2 OEM sticker on the chips is meant to offer 
electrical insulation, while the thermal pad is there to conduct heat away 
from the chips and transfer it to the heatsink.

The stock MoBo M.2 heatsink is often screwed down on top of the M.2 stick, but 
some makes/models may provide a wrap around heatsink enclosure as many 
aftermarket heatsinks do.  Beware PCIe 5.0 M.2s run hotter than PCie 4.0, or 
3.0, especially above 1TB and will require a heatsink or will overheat and end 
up throttling themselves.

> I did a ebay search and found
> the generic pads coming in from thicknesses of as little as .2mm to as
> thick as like 2mm or so.  The 2mm thick one has lots of wiggle room to
> contact both chips and easily I'd think.  The .2mm not so much but could
> if installed right, I guess.  Tape would likely be the worst since it
> doesn't compress much if any.

I used calipers to measure the gap between the stock heatsink and the 

Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-08 Thread Dale
Michael wrote:
> On Saturday, 8 June 2024 18:46:05 BST Dale wrote:
>
>> I got the little m.2 thing today.  It's a lot smaller than I expected. 
>> A whole lot smaller.  It's fairly tiny actually.  They look bigger in
>> pictures or on video.  This reminds me of the discussion on the number
>> of transistors on a chip.  I bet they packed tight in there.  I bought a
>> heat sink that goes on each individual chip, one on controller, one on
>> data chip, two if it has two data chips.  Anyway, it has only two chips
>> so I got extra heat sinks.  LOL  They fairly large since the mobo has
>> nothing on top of them.  I got plenty of room.  That said, anyone else
>> notice they make heat sinks for those things that have heat pipes and
>> itty bitty fans??  O_O  It does make them run cool tho.  :/  I like my
>> little heat sinks better.  Pretty good size and no moving parts.  They
>> come in a couple colors.  Linky.
>>
>> https://www.ebay.com/itm/254119864180
>>
>> Oh, for those reading this.  The data controller chip is a little
>> thinner than the data chip.  I confirmed that on mine.  If you don't use
>> a heatsink that has a thicker pad for that, it leaves a gap and the
>> controller chip doesn't make contact which means it runs hotter.  As I
>> mentioned earlier, the controller seems to produce more heat so it needs
>> the heatsink more than the data chip.  On videos, some people use a
>> additional pad to make up the difference on the controller chip.
> Additional pad, or alternatively two pads of different thicknesses, with the 
> thicker pad fitted on the controller chip.
>
> https://www.youtube.com/watch?v=VIUU5ogVHg8

I think this is the one I watched.  I'm not 100% certain.  I watched
several different ones including reviews etc. 

https://www.youtube.com/watch?v=I8Z09nU554Q


>
>> I
>> noticed on a couple heatsinks, they mention the difference and show they
>> use a pad that makes full contact on both chips.  It looks like the
>> thermal pad is thicker and more squishy.  One I saw looks like it is
>> just a little thicker on the controller end. 
> Thermal pads are spongy and compressible.  When you screw down the metal 
> heatsink on the NVMe stick you should find the thermal pad area over the NAND 
> chips will just squish more than over the controller.  Not sure if it makes 
> any difference in performance using a single thickness thermal pad, as long 
> as 
> the thickness of it is enough to make good contact with the controller chip, 
> after it has squished over the NAND chips.  I would think for normal PC 
> operation using a different heatsink to what the MoBo comes with would be an 
> overkill and it may not make much of a difference anyway.  I fitted a single 
> thickness thermal pad on my NVMe and it idles at ~46-47°C pushing up to 
> ~54-57°C when being written to in daily operations.  I haven't run any 
> benchmark load tests to see how hot it may get, but with the above 
> temperature 
> range I would think the thermal pad is working fine.  :-)


I think it depends on the thickness of the pad.  Some pads look really
thin on cheaper devices while some appear thicker on ones costing a
little bit more.  A thicker pad has more ability to fill in the
difference with likely not even a lot of pressure.  A thin pad or what
some refer to as tape which is not compressible much at all would likely
not work as well unless you either put a lot of pressure on it or added
a little pad to make up the difference.  I did a ebay search and found
the generic pads coming in from thicknesses of as little as .2mm to as
thick as like 2mm or so.  The 2mm thick one has lots of wiggle room to
contact both chips and easily I'd think.  The .2mm not so much but could
if installed right, I guess.  Tape would likely be the worst since it
doesn't compress much if any.

The mobo I'm getting doesn't have any heatsink at all.  The original one
did.  To be honest tho, I'd prefer to put my own on there.  On some
older boards I have, the NAS1 box for example, I added some heatsinks to
the little regulator things close to the CPU.  They get rather warm when
compiling packages.  They not to bad when sitting idle tho.  Those
little heatsinks keep them pretty cool.  If the current mobos didn't
have a heatsink on those already, I'd likely add one that is much more
efficient than what some mobos come with.  It's like CPUs.  They used to
come with a cooler, it seems they don't anymore for most I guess.  I've
never used a OEM cooler because I want my CPU to run cooler than one you
can almost cook a egg on.  As I tell people, I build puters like a tank.
I like everything to run cool and well below the max temps. 

I still can't get over how tiny that m.2 is.  Kinda looking forward to
the stuff coming in.  I just hope the power supply cables are long
enough.  That Fractal case is rather large.  About like my Cooler Master. 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-08 Thread Michael
On Saturday, 8 June 2024 18:46:05 BST Dale wrote:

> I got the little m.2 thing today.  It's a lot smaller than I expected. 
> A whole lot smaller.  It's fairly tiny actually.  They look bigger in
> pictures or on video.  This reminds me of the discussion on the number
> of transistors on a chip.  I bet they packed tight in there.  I bought a
> heat sink that goes on each individual chip, one on controller, one on
> data chip, two if it has two data chips.  Anyway, it has only two chips
> so I got extra heat sinks.  LOL  They fairly large since the mobo has
> nothing on top of them.  I got plenty of room.  That said, anyone else
> notice they make heat sinks for those things that have heat pipes and
> itty bitty fans??  O_O  It does make them run cool tho.  :/  I like my
> little heat sinks better.  Pretty good size and no moving parts.  They
> come in a couple colors.  Linky.
> 
> https://www.ebay.com/itm/254119864180
> 
> Oh, for those reading this.  The data controller chip is a little
> thinner than the data chip.  I confirmed that on mine.  If you don't use
> a heatsink that has a thicker pad for that, it leaves a gap and the
> controller chip doesn't make contact which means it runs hotter.  As I
> mentioned earlier, the controller seems to produce more heat so it needs
> the heatsink more than the data chip.  On videos, some people use a
> additional pad to make up the difference on the controller chip.

Additional pad, or alternatively two pads of different thicknesses, with the 
thicker pad fitted on the controller chip.

https://www.youtube.com/watch?v=VIUU5ogVHg8


> I
> noticed on a couple heatsinks, they mention the difference and show they
> use a pad that makes full contact on both chips.  It looks like the
> thermal pad is thicker and more squishy.  One I saw looks like it is
> just a little thicker on the controller end. 

Thermal pads are spongy and compressible.  When you screw down the metal 
heatsink on the NVMe stick you should find the thermal pad area over the NAND 
chips will just squish more than over the controller.  Not sure if it makes 
any difference in performance using a single thickness thermal pad, as long as 
the thickness of it is enough to make good contact with the controller chip, 
after it has squished over the NAND chips.  I would think for normal PC 
operation using a different heatsink to what the MoBo comes with would be an 
overkill and it may not make much of a difference anyway.  I fitted a single 
thickness thermal pad on my NVMe and it idles at ~46-47°C pushing up to 
~54-57°C when being written to in daily operations.  I haven't run any 
benchmark load tests to see how hot it may get, but with the above temperature 
range I would think the thermal pad is working fine.  :-)




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-08 Thread Dale
Frank Steinmetzger wrote:
> Am Samstag, 8. Juni 2024, 04:05:17 MESZ schrieb Dale:
>
>
>>> DisplayPort supports daisy-chaining. So if you do get another monitor some
>>> day, look for one that has this feature and you can drive two monitors
>>> with
>>> one port on the PC.
>> That's something I didn't know.  I wondered why they had that when a
>> HDMI port is about the same size and can handle about the same
>> resolution.  It has abilities HDMI doesn't.  Neat.  :-D
> Polemically speaking, HDMI is designed for the concerns of the MAFIA (Music 
> and Film Industry of America) with stuff like DRM. DisplayPort is technically 
> the better protocol, for example with more bandwidth and it is open. There 
> was 
> news recently that the HDMI forum would not allow AMD to implement HDMI 2.1 
> in 
> its open source driver, which means no 4K 120 Hz for Linux users.

That sucks about HDMI.  Next time I buy a video card, I need to get one
with display port outputs.  It seems more Linux friendly.  :-D

>> Sadly, the CPU I got is for processing only, no video support it says.
> So you got an F model?

I got the X model.  It's supposed to be a little bit faster.  o_O


>
>>> But what I also just remembered: only the ×16 GPU slot and the primary M.2
>>> slots (which are often one gen faster than the other M.2 slots) are
>>> connected to the CPU via dedicated links. All other PCIe slots are behind
>>> the chipset. And that in turn is connected to the CPU via a PCIe 4.0×4
>>> link. This is probably the technical reason why there are so few boards
>>> with slots wider than ×4 – there is just no way to make use of them,
>>> because they all most go through that ×4 bottleneck to the CPU.
>>>
>>> ┌───┐ 5.0×4 ┌───┐ 4.0×4 ┌─┐   ┌───┐
>>> │M.2┝===┥CPU┝━━━┥ Chipset ┝━━━┥M.2│
>>> └───┘   └─┰─┘   └─┰─┰─┘   └───┘
>>>
>>> 5.0×16┃   ┃ ┃
>>> 
>>> ┌─┸─┐┌┸─┐ ┌─┸┐
>>> │GPU││PCIe 1│ │PCIe 2│
>>> └───┘└──┘ └──┘
>>> […]
>> Nice block diagram.  You use software to make that?
> Yes, vim’s builtin digraph feature. O:-)
>

I was hoping so.  Doing that by hand would take a LOT of time.  I never
could figure out vim.  I use nano on command line and Kwrite in a GUI. 
Care to guess which I really prefer???  LOL 

I got the little m.2 thing today.  It's a lot smaller than I expected. 
A whole lot smaller.  It's fairly tiny actually.  They look bigger in
pictures or on video.  This reminds me of the discussion on the number
of transistors on a chip.  I bet they packed tight in there.  I bought a
heat sink that goes on each individual chip, one on controller, one on
data chip, two if it has two data chips.  Anyway, it has only two chips
so I got extra heat sinks.  LOL  They fairly large since the mobo has
nothing on top of them.  I got plenty of room.  That said, anyone else
notice they make heat sinks for those things that have heat pipes and
itty bitty fans??  O_O  It does make them run cool tho.  :/  I like my
little heat sinks better.  Pretty good size and no moving parts.  They
come in a couple colors.  Linky.

https://www.ebay.com/itm/254119864180

Oh, for those reading this.  The data controller chip is a little
thinner than the data chip.  I confirmed that on mine.  If you don't use
a heatsink that has a thicker pad for that, it leaves a gap and the
controller chip doesn't make contact which means it runs hotter.  As I
mentioned earlier, the controller seems to produce more heat so it needs
the heatsink more than the data chip.  On videos, some people use a
additional pad to make up the difference on the controller chip.  I
noticed on a couple heatsinks, they mention the difference and show they
use a pad that makes full contact on both chips.  It looks like the
thermal pad is thicker and more squishy.  One I saw looks like it is
just a little thicker on the controller end. 

I guess Monday will be the big day, IF, big IF, they don't get hung up
in the local USPS sorting hub.  That place is a little better now but
some packages still get hung up down there. 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-08 Thread Meowie Gamer
vim has a WHAT?! You gotta tell me how to use that.




Sent with Proton Mail secure email.

On Saturday, June 8th, 2024 at 9:39 AM, Frank Steinmetzger  
wrote:

> Am Samstag, 8. Juni 2024, 04:05:17 MESZ schrieb Dale:
> 
> > > DisplayPort supports daisy-chaining. So if you do get another monitor some
> > > day, look for one that has this feature and you can drive two monitors
> > > with
> > > one port on the PC.
> > 
> > That's something I didn't know. I wondered why they had that when a
> > HDMI port is about the same size and can handle about the same
> > resolution. It has abilities HDMI doesn't. Neat. :-D
> 
> 
> Polemically speaking, HDMI is designed for the concerns of the MAFIA (Music
> and Film Industry of America) with stuff like DRM. DisplayPort is technically
> the better protocol, for example with more bandwidth and it is open. There was
> news recently that the HDMI forum would not allow AMD to implement HDMI 2.1 in
> its open source driver, which means no 4K 120 Hz for Linux users.
> 
> > Sadly, the CPU I got is for processing only, no video support it says.
> 
> 
> So you got an F model?
> 
> > > But what I also just remembered: only the ×16 GPU slot and the primary M.2
> > > slots (which are often one gen faster than the other M.2 slots) are
> > > connected to the CPU via dedicated links. All other PCIe slots are behind
> > > the chipset. And that in turn is connected to the CPU via a PCIe 4.0×4
> > > link. This is probably the technical reason why there are so few boards
> > > with slots wider than ×4 – there is just no way to make use of them,
> > > because they all most go through that ×4 bottleneck to the CPU.
> > > 
> > > ┌───┐ 5.0×4 ┌───┐ 4.0×4 ┌─┐ ┌───┐
> > > │M.2┝===┥CPU┝━━━┥ Chipset ┝━━━┥M.2│
> > > └───┘ └─┰─┘ └─┰─┰─┘ └───┘
> > > 
> > > 5.0×16┃ ┃ ┃
> > > 
> > > ┌─┸─┐ ┌┸─┐ ┌─┸┐
> > > │GPU│ │PCIe 1│ │PCIe 2│
> > > └───┘ └──┘ └──┘
> > > […]
> > 
> > Nice block diagram. You use software to make that?
> 
> 
> Yes, vim’s builtin digraph feature. O:-)
> 
> --
> Team work:
> Everyone does what he wants, nobody does what he should, and all play along.



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-08 Thread Frank Steinmetzger
On Saturday, 8 June 2024 at 04:05:17 CEST, Dale wrote:


> > DisplayPort supports daisy-chaining. So if you do get another monitor some
> > day, look for one that has this feature and you can drive two monitors
> > with
> > one port on the PC.
> 
> That's something I didn't know.  I wondered why they had that when a
> HDMI port is about the same size and can handle about the same
> resolution.  It has abilities HDMI doesn't.  Neat.  :-D

Polemically speaking, HDMI is designed for the concerns of the MAFIA (Music 
and Film Industry of America) with stuff like DRM. DisplayPort is technically 
the better protocol, for example with more bandwidth and it is open. There was 
news recently that the HDMI forum would not allow AMD to implement HDMI 2.1 in 
its open source driver, which means no 4K 120 Hz for Linux users.

> Sadly, the CPU I got is for processing only, no video support it says.

So you got an F model?

> > But what I also just remembered: only the ×16 GPU slot and the primary M.2
> > slots (which are often one gen faster than the other M.2 slots) are
> > connected to the CPU via dedicated links. All other PCIe slots are behind
> > the chipset. And that in turn is connected to the CPU via a PCIe 4.0×4
> > link. This is probably the technical reason why there are so few boards
> > with slots wider than ×4 – there is just no way to make use of them,
> > because they all most go through that ×4 bottleneck to the CPU.
> > 
> > ┌───┐ 5.0×4 ┌───┐ 4.0×4 ┌─┐   ┌───┐
> > │M.2┝===┥CPU┝━━━┥ Chipset ┝━━━┥M.2│
> > └───┘   └─┰─┘   └─┰─┰─┘   └───┘
> > 
> > 5.0×16┃   ┃ ┃
> > 
> > ┌─┸─┐┌┸─┐ ┌─┸┐
> > │GPU││PCIe 1│ │PCIe 2│
> > └───┘└──┘ └──┘
> > […]
> 
> Nice block diagram.  You use software to make that?

Yes, vim’s builtin digraph feature. O:-)

-- 
Team work:
Everyone does what he wants, nobody does what he should, and all play along.


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-07 Thread Dale
Frank Steinmetzger wrote:
> Am Tue, Jun 04, 2024 at 05:49:31AM -0400 schrieb Rich Freeman:
>
>> On Tue, Jun 4, 2024 at 2:44 AM Dale  wrote:
>>> I did some more digging.  It seems that all the LSI SAS cards I found
>>> need a PCIe x8 slot.  The only slot available is the one intended for
>>> video.
>> The board you linked has 2 4x slots that are physically 16x, so the
>> card should work fine in those, just at 4x speed.
> I can never remember the available throughput for each generation. So I 
> think about my own board: it as a 2.0×2 NVMe slot that gives me 1 GB/s 
> theoretical bandwidth. So if you have 3.0×4, that is twice the lanes and 
> twice the BW/lane, which yields 4 GB/s gross throughput. If you attach 
> spinning rust to that, you’d need around 15 to 20 HDDs to saturate that 
> link. So I wouldn’t worry too much about underperformance.

Now this is interesting.  So a good PCIe V3 card with a X4 connection
could handle *almost* two dozen hard drives and still be able to
transfer about all the data the drives can receive or send.  That helps
me a lot.  Two 12 port cards with say 8 drives connected would handle
pretty much everything I can throw at it.  The Fractal case can't hold
enough drives to fill up all the cards and mobo ports, yet anyway.  24
ports on cards, 4 on mobo, 28 hard drives.  If I use a m.2 to SATA
thing, that's another 6 ports, if they not sharing with a PCIe slot that
is. 
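
To sanity-check that tally and the per-card bandwidth, here's a quick 
back-of-the-envelope sketch; the ~985 MB/s per PCIe 3.0 lane and ~250 MB/s per 
spinning drive figures are rough assumptions, not measurements:

# Back-of-the-envelope tally for the setup above.
PCIE3_LANE_MBS = 985    # ~usable MB/s per PCIe 3.0 lane (assumption)
HDD_MBS = 250           # assumed sustained MB/s per spinning drive

card_ports    = 2 * 12  # two 12-port SATA cards
mobo_ports    = 4       # SATA ports on the mobo
m2_sata_ports = 6       # optional m.2-to-SATA adapter
print("total SATA ports:", card_ports + mobo_ports + m2_sata_ports)

# One card in a x4 slot with 8 drives actually connected:
link = 4 * PCIE3_LANE_MBS
drives = 8
print(f"x4 link: {link} MB/s, worst-case need: {drives * HDD_MBS} MB/s, "
      f"~{link / drives:.0f} MB/s per drive if all are busy")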


>>> I'd rather not
>>> use it on the new build because I've thought about having another
>>> monitor added for desktop use so I would need three ports at least.
> DisplayPort supports daisy-chaining. So if you do get another monitor some 
> day, look for one that has this feature and you can drive two monitors with 
> one port on the PC.

That's something I didn't know.  I wondered why they had that when a
HDMI port is about the same size and can handle about the same
resolution.  It has abilities HDMI doesn't.  Neat.  :-D  Sadly, the CPU
I got is for processing only, no video support it says.


>>> The little SATA controllers I currently use tend to only need PCIe x1.
>>> That is slower but at least it works.
> PCIe 3.0×1 is still fast enough for four HDDs at full speed. You may get 
> saturation at the outermost tracks, but how often does that happen anyways?
> I can think of two cases that produce enough I/O for that:
> - copy stuff from one internal RAID to another
>   (you use LVM, does that support striping to distribute I/O?)
> - a RAID scrub
>
> Everything else involves two disks at most—when you copy stuff from one to 
> another. Getting data into the system is limited by the network which is far 
> slower than PCIe. And a full SMART test does not use the data bus at all.
>
>
> But what I also just remembered: only the ×16 GPU slot and the primary M.2 
> slots (which are often one gen faster than the other M.2 slots) are 
> connected to the CPU via dedicated links. All other PCIe slots are behind 
> the chipset. And that in turn is connected to the CPU via a PCIe 4.0×4 link. 
> This is probably the technical reason why there are so few boards with slots 
> wider than ×4 – there is just no way to make use of them, because they all 
> most go through that ×4 bottleneck to the CPU.
>
> ┌───┐ 5.0×4 ┌───┐ 4.0×4 ┌─┐   ┌───┐
> │M.2┝===┥CPU┝━━━┥ Chipset ┝━━━┥M.2│
> └───┘   └─┰─┘   └─┰─┰─┘   └───┘
> 5.0×16┃   ┃ ┃
> ┌─┸─┐┌┸─┐ ┌─┸┐
> │GPU││PCIe 1│ │PCIe 2│
> └───┘└──┘ └──┘
>
> Here are block diagrams of AM5 B- and X-chipsets and a more verbose 
> explanation:
> https://www.anandtech.com/show/17585/amd-zen-4-ryzen-9-7950x-and-ryzen-5-7600x-review-retaking-the-high-end/4
>
> Theoretically, the PCIe controller in the CPU has the ability to split up 
> the ×16 GPU link into 2×8 and other subdivisions, but that would cripple the 
> GPU, which is the normal use case for such mobos, so the feature is very 
> seldomly found.
>
> If I look at all available AM5 mobos that have at least two ×8 slots, there 
> are just seven out of 126: https://skinflint.co.uk/?cat=mbam5=19227_2
> You can also use the filter to look for boards with 3 ×4 slots.
>


Nice block diagram.  You use software to make that?  Anyway, I looked at
the skinflint link, those boards are pricey but still don't feature lots
of PCIe slots.  I've come to the realization that a lot of mobos are
geared more towards gamers whether it is in the name or not.  Very few,
maybe any, are basic mobos with the ability to expand with whatever
tools a user needs, be it lots of USB ports, wi-fi, drive controllers or
any other device needed.  My current Gigabyte has the basic tools;
ethernet, USB ports for keyboard and mouse plus a few extras for the
front of the case.  Then it has PCIe slots for whatever the user needs
to add.  They can even be custom controller cards for lights, freezers
or whatever.  Current mobos, don't have that ability.  One would think

Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-06 Thread Frank Steinmetzger
On Tue, Jun 04, 2024 at 05:49:31AM -0400, Rich Freeman wrote:

> On Tue, Jun 4, 2024 at 2:44 AM Dale  wrote:
> >
> > I did some more digging.  It seems that all the LSI SAS cards I found
> > need a PCIe x8 slot.  The only slot available is the one intended for
> > video.
> 
> The board you linked has 2 4x slots that are physically 16x, so the
> card should work fine in those, just at 4x speed.

I can never remember the available throughput for each generation. So I 
think about my own board: it has a 2.0×2 NVMe slot that gives me 1 GB/s 
theoretical bandwidth. So if you have 3.0×4, that is twice the lanes and 
twice the BW/lane, which yields 4 GB/s gross throughput. If you attach 
spinning rust to that, you’d need around 15 to 20 HDDs to saturate that 
link. So I wouldn’t worry too much about underperformance.

> > I'd rather not
> > use it on the new build because I've thought about having another
> > monitor added for desktop use so I would need three ports at least.

DisplayPort supports daisy-chaining. So if you do get another monitor some 
day, look for one that has this feature and you can drive two monitors with 
one port on the PC.

> > The little SATA controllers I currently use tend to only need PCIe x1.
> > That is slower but at least it works.

PCIe 3.0×1 is still fast enough for four HDDs at full speed. You may get 
saturation at the outermost tracks, but how often does that happen anyways?
I can think of two cases that produce enough I/O for that:
- copy stuff from one internal RAID to another
  (you use LVM, does that support striping to distribute I/O?)
- a RAID scrub

Everything else involves two disks at most—when you copy stuff from one to 
another. Getting data into the system is limited by the network which is far 
slower than PCIe. And a full SMART test does not use the data bus at all.
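
The arithmetic behind those estimates is easy to redo.  A rough sketch using 
approximate usable throughput per lane (encoding overhead already folded in); 
the ~250 MB/s sustained HDD figure is an assumption:

# Rough PCIe throughput arithmetic.  PER_LANE_MBS is approximate usable MB/s
# per lane; HDD_MBS is an assumed sustained HDD rate.
PER_LANE_MBS = {"1.0": 250, "2.0": 500, "3.0": 985, "4.0": 1970, "5.0": 3940}
HDD_MBS = 250

def link_mbs(gen, lanes):
    return PER_LANE_MBS[gen] * lanes

print("2.0 x2:", link_mbs("2.0", 2), "MB/s")   # the ~1 GB/s NVMe slot above
for gen, lanes in (("3.0", 4), ("3.0", 1)):
    mbs = link_mbs(gen, lanes)
    print(f"{gen} x{lanes}: {mbs} MB/s -> ~{mbs / HDD_MBS:.1f} HDDs to saturate")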


But what I also just remembered: only the ×16 GPU slot and the primary M.2 
slots (which are often one gen faster than the other M.2 slots) are 
connected to the CPU via dedicated links. All other PCIe slots are behind 
the chipset. And that in turn is connected to the CPU via a PCIe 4.0×4 link. 
This is probably the technical reason why there are so few boards with slots 
wider than ×4 – there is just no way to make use of them, because they all 
must go through that ×4 bottleneck to the CPU.

┌───┐ 5.0×4 ┌───┐ 4.0×4 ┌─────────┐   ┌───┐
│M.2┝=======┥CPU┝━━━━━━━┥ Chipset ┝━━━┥M.2│
└───┘       └─┰─┘       └──┰───┰──┘   └───┘
        5.0×16┃            ┃   ┃
            ┌─┸─┐     ┌────┸─┐┌┸─────┐
            │GPU│     │PCIe 1││PCIe 2│
            └───┘     └──────┘└──────┘

Here are block diagrams of AM5 B- and X-chipsets and a more verbose 
explanation:
https://www.anandtech.com/show/17585/amd-zen-4-ryzen-9-7950x-and-ryzen-5-7600x-review-retaking-the-high-end/4

Theoretically, the PCIe controller in the CPU has the ability to split up 
the ×16 GPU link into 2×8 and other subdivisions, but that would cripple the 
GPU, which is the normal use case for such mobos, so the feature is very 
seldom found.

If I look at all available AM5 mobos that have at least two ×8 slots, there 
are just seven out of 126: https://skinflint.co.uk/?cat=mbam5=19227_2
You can also use the filter to look for boards with 3 ×4 slots.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

“If I could explain it to the average person, I wouldn't have been worth
the Nobel Prize.” – Richard Feynman





Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-06 Thread Dale
Michael wrote:
> On Thursday, 6 June 2024 04:54:41 BST Dale wrote:
>
>> I was digging around Ebay.  Ran up on a used combo and then had a crazy
>> idea.  I found a ASUS B550-plus AC-HES mobo that is AM4.  I took that
>> idea and started building a combo with new parts.  CPU, Ryzen 7 5800X
>> and my little 4 port video card if CPU has no video support.  AMD says
>> it does.
> The Ryzen 7 5800X has 8 CPU cores, but no graphics cores:
>
> https://www.amd.com/en/products/processors/desktops/ryzen/5000-series/amd-ryzen-7-5800x.html
>
> If you take a look at the manual for the MoBo you'll see the 5000/3000 series 
> CPUs will be able to support up to four PCIe Gen 4 SSDs on the top PCIe x16 
> slot, if you install e.g. a Hyper M.2 x16 expansion card:
>
> https://www.asus.com/uk/motherboards-components/motherboards/accessories/
> hyper-m-2-x16-card-v2/
>
> but the same PCIe x16 slot will only be able to support up to three PCIe Gen 
> 3 
> SSDs with an expansion card, if you installed a Ryzen 5000/3000 G-Series 
> processor which has the integrated graphics cores.
>

I'm planning to use the video card I already have with 4 ports anyway. 
That will allow me to have another monitor for when I'm processing
files.  It's not much of a card but I don't need much anyway. 


>> The mobo has three PCIe X1 slots and a X4 slot.
> Actually it is more complicated, because bandwidth is shared across the PCIe 
> x16 slots:
>
> https://www.asus.com/motherboards-components/motherboards/prime/prime-b550-plus-ac-hes/
>
> It has four PCIe x16 slots (the top being the PCIe Gen 4 and the rest PCIe 
> Gen 
> 3), but if any of the bottom 3 slots are occupied, the top slot will *only* 
> run in x1 instead of x4 mode.  So you'll end up with four PCIe x16 slots, all 
> running in x1 mode.
>
> In addition, if you decide to plug in a NVMe card in the second M.2 port, 
> then 
> 2 out of the 6 SATA ports will be disabled - their bandwidth eaten up by the 
> second M.2 port.
>

This is still an improvement over the other mobo which didn't have the
slots at all, shared or otherwise.  That's the one thing that I did not
like about the other mobo.  The rest of it was fine.  The lack of PCIe
slots just became a deal breaker for me.

I plan to use the M.2 thingy that is closest to the CPU for the OS. 
From my reading it is not shared but even if it is, it will still be
faster than my current SATA II drive, yes, SATA 2.  I don't plan to use
the one that is shared with the PCIe X1 slot.  I may lose two SATA mobo
ports with a card, but I can have either 10 or 12 ports on that
card.  Slower but still fast enough. 


>> A SAS card
>> would work just slower than when in X8 slot but the little PCIe 10 or 12
>> port cards would work fine.  Gives me 30 drives total at least on the
>> cards plus the four on the mobo, two goes away with using one of the
>> PCIe slots most likely.  Bifurcation I think they call it when they
>> share roadways.
> I can't find a PCIe x8 slot on the above MoBo ...  :-/
>

When I was discussing SAS or HBA cards, they were all X8 cards but Rich
reminded me that it would work in a X4 slot, just slower.  So, I'll
likely just use the X1 SATA cards I already have extras of and it is
whatever speed it can manage.  If I had a X8 slot and a HBA card that
could handle say 20 drives, then I'd go that route.  Since I really only
have a X4 slot, well, why bother. 


>> My thinking.  Build above now.  In a year, or two, I can build either
>> the rig I was working on a lot cheaper or a even newer rig that is even
>> faster if say AM6 socket CPUs have arrived.  Then the rig above with
>> some hard drive options can become the new NAS box.  I can then move
>> some drives out of the main rig, newer one a year or so down the road,
>> and not need so many PCIe slots in the main rig, hopefully anyway.  I
>> may even warm up to the idea of using USB for hard drives.  I'm
>> surprised hard drives don't come with USB connections instead of SATA
>> already.
>>
>> The only downside, the NAS box will have to run 24/7 as well.  Then I
>> have two puters running all the time.  To offset that, the combo above
>> does pull a lot less power than my current rig.  Not a huge difference
>> but fair amount.  Odds are the build a year or so down the road will
>> also pull less power than current rig.  I could end up with same amount
>> of power usage or less, even with two running instead of one.
>>
>> I said it was a crazy idea.  LOL   This time tho, I'm sort of planning
>> ahead instead of just coming up with a temporary fix all the time.  This
>> is also a little cheaper but still faster.  Another big thing, newer as
>> well.  My current rig is about 10 or 11 years old.  It may run another 5
>> or so years but could go out anytime.  At least I'll have a newer rig
>> not likely to let the smoke out.  Plus have a path to a more sane
>> setup.  I just need to get one of those chia harvester cases that holds
>> 40 or so hard drives.  ROFLMBO
>>
>> Dale
>>
>> :-)  

Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-06 Thread Michael
On Thursday, 6 June 2024 04:54:41 BST Dale wrote:

> I was digging around Ebay.  Ran up on a used combo and then had a crazy
> idea.  I found a ASUS B550-plus AC-HES mobo that is AM4.  I took that
> idea and started building a combo with new parts.  CPU, Ryzen 7 5800X
> and my little 4 port video card if CPU has no video support.  AMD says
> it does.

The Ryzen 7 5800X has 8 CPU cores, but no graphics cores:

https://www.amd.com/en/products/processors/desktops/ryzen/5000-series/amd-ryzen-7-5800x.html

If you take a look at the manual for the MoBo you'll see the 5000/3000 series 
CPUs will be able to support up to four PCIe Gen 4 SSDs on the top PCIe x16 
slot, if you install e.g. a Hyper M.2 x16 expansion card:

https://www.asus.com/uk/motherboards-components/motherboards/accessories/
hyper-m-2-x16-card-v2/

but the same PCIe x16 slot will only be able to support up to three PCIe Gen 3 
SSDs with an expansion card, if you installed a Ryzen 5000/3000 G-Series 
processor which has the integrated graphics cores.


> The mobo has three PCIe X1 slots and a X4 slot.

Actually it is more complicated, because bandwidth is shared across the PCIe 
x16 slots:

https://www.asus.com/motherboards-components/motherboards/prime/prime-b550-plus-ac-hes/

It has four PCIe x16 slots (the top being the PCIe Gen 4 and the rest PCIe Gen 
3), but if any of the bottom 3 slots are occupied, the top slot will *only* 
run in x1 instead of x4 mode.  So you'll end up with four PCIe x16 slots, all 
running in x1 mode.

In addition, if you decide to plug in a NVMe card in the second M.2 port, then 
2 out of the 6 SATA ports will be disabled - their bandwidth eaten up by the 
second M.2 port.
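
One way to see what you actually ended up with, once everything is plugged in, 
is to read the negotiated link speed and width the kernel exposes under sysfs 
(lspci -vv shows the same data in its LnkSta lines).  A minimal sketch; device 
naming differs per board:

#!/usr/bin/env python3
# Minimal sketch: print the negotiated PCIe link speed/width of each device,
# as exposed under /sys/bus/pci/devices.  Not every device exports these
# attributes, so missing ones are skipped.
from pathlib import Path

def read(p):
    try:
        return p.read_text().strip()
    except OSError:
        return "?"

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    if not (dev / "current_link_speed").exists():
        continue
    print(f"{dev.name}: {read(dev / 'current_link_speed')} "
          f"x{read(dev / 'current_link_width')} "
          f"(max {read(dev / 'max_link_speed')} x{read(dev / 'max_link_width')})")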


> A SAS card
> would work just slower than when in X8 slot but the little PCIe 10 or 12
> port cards would work fine.  Gives me 30 drives total at least on the
> cards plus the four on the mobo, two goes away with using one of the
> PCIe slots most likely.  Bifurcation I think they call it when they
> share roadways.

I can't find a PCIe x8 slot on the above MoBo ...  :-/


> My thinking.  Build above now.  In a year, or two, I can build either
> the rig I was working on a lot cheaper or a even newer rig that is even
> faster if say AM6 socket CPUs have arrived.  Then the rig above with
> some hard drive options can become the new NAS box.  I can then move
> some drives out of the main rig, newer one a year or so down the road,
> and not need so many PCIe slots in the main rig, hopefully anyway.  I
> may even warm up to the idea of using USB for hard drives.  I'm
> surprised hard drives don't come with USB connections instead of SATA
> already.
> 
> The only downside, the NAS box will have to run 24/7 as well.  Then I
> have two puters running all the time.  To offset that, the combo above
> does pull a lot less power than my current rig.  Not a huge difference
> but fair amount.  Odds are the build a year or so down the road will
> also pull less power than current rig.  I could end up with same amount
> of power usage or less, even with two running instead of one.
> 
> I said it was a crazy idea.  LOL   This time tho, I'm sort of planning
> ahead instead of just coming up with a temporary fix all the time.  This
> is also a little cheaper but still faster.  Another big thing, newer as
> well.  My current rig is about 10 or 11 years old.  It may run another 5
> or so years but could go out anytime.  At least I'll have a newer rig
> not likely to let the smoke out.  Plus have a path to a more sane
> setup.  I just need to get one of those chia harvester cases that holds
> 40 or so hard drives.  ROFLMBO
> 
> Dale
> 
> :-)  :-) 

I think it was mentioned, but for your assumed requirements I suggest you take 
a look at refurbished workstations, or tower servers.  They are designed to 
run 24-7, can host a huge number of drives, have obscenely large amounts of 
quad/octa channel RAM (ECC too), come with the full variety of expansion 
options, disk caddies and adapters, receive OEM BIOS updates for ever and a 
day and seek to minimise power consumption.  They won't scream their head off 
in GHz, but more than compensate with multiple cores, dual CPUs, and more than 
enough PCIe lanes.

It seems to me you are currently looking to buy the equivalent of a Ferrari in 
terms of speed and technology, but intend to load it with rubble and use it as 
if it were a truck.  I'm exaggerating here, only to make a point.  It ought to 
be better overall if you buy a tool designed for the job you intend to use it 
for.



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-05 Thread Dale
Mark Knecht wrote:
>
>
> On Sat, Jun 1, 2024 at 10:38 PM Dale  > wrote:
> >
> > Howdy, again,
> >
>
> Hi Dale,
>    As part of a little AI project I'm working on I ran across a
> relatively 
> inexpensive ideo you might want to look into. The basic idea is to 
> use a PXIx -> M.2 E Key card and on that card load an M2. E Key
> to SATA adapter. 
>
> First, an E Key adapter card:
>
> https://www.amazon.com/Ableconn-PEXM2150E-Express-Adapter-Socket/dp/B07D6ZCBHY
>
> And second, a 6 port E Key to SATA adapter:
>
> https://www.amazon.com/SATA3-0-Adapter-Expansion-Interface-Indicator/dp/B0B75JWXXS
>
> I have NO NO NO idea if this would actually work, or how well it would
> work, but
> for roughly $60 it might give you a path to the silly number of hard
> drives you want 
> to run. ;-)
>
> In my case I'm looking at this same card, but loaded with a neural
> network processor
> running Tensorflow Lite.
>
> Anyway, I thought you might be interested.
>
> Cheers,
> Mark


I have some PCIe cards that go up to 12 SATA ports.  In a X1 slot, they
not exactly speedy but they do work pretty well.  I try to balance the
number of drives to help with speed as much as I can.  I also put all
drives in one LVM on one card.  That way if one card stops working, the
whole LVM goes instead of just one or two of three drives.  You have a
idea tho with those two. 

I was digging around Ebay.  Ran up on a used combo and then had a crazy
idea.  I found a ASUS B550-plus AC-HES mobo that is AM4.  I took that
idea and started building a combo with new parts.  CPU, Ryzen 7 5800X
and my little 4 port video card if CPU has no video support.  AMD says
it does.  The mobo has three PCIe X1 slots and a X4 slot.  A SAS card
would work just slower than when in X8 slot but the little PCIe 10 or 12
port cards would work fine.  Gives me 30 drives total at least on the
cards plus the four on the mobo, two goes away with using one of the
PCIe slots most likely.  Bifurcation I think they call it when they
share roadways.

My thinking.  Build above now.  In a year, or two, I can build either
the rig I was working on a lot cheaper or a even newer rig that is even
faster if say AM6 socket CPUs have arrived.  Then the rig above with
some hard drive options can become the new NAS box.  I can then move
some drives out of the main rig, newer one a year or so down the road,
and not need so many PCIe slots in the main rig, hopefully anyway.  I
may even warm up to the idea of using USB for hard drives.  I'm
surprised hard drives don't come with USB connections instead of SATA
already.

The only downside, the NAS box will have to run 24/7 as well.  Then I
have two puters running all the time.  To offset that, the combo above
does pull a lot less power than my current rig.  Not a huge difference
but fair amount.  Odds are the build a year or so down the road will
also pull less power than current rig.  I could end up with same amount
of power usage or less, even with two running instead of one.

I said it was a crazy idea.  LOL   This time tho, I'm sort of planning
ahead instead of just coming up with a temporary fix all the time.  This
is also a little cheaper but still faster.  Another big thing, newer as
well.  My current rig is about 10 or 11 years old.  It may run another 5
or so years but could go out anytime.  At least I'll have a newer rig
not likely to let the smoke out.  Plus have a path to a more sane
setup.  I just need to get one of those chia harvester cases that holds
40 or so hard drives.  ROFLMBO

Dale

:-)  :-) 


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-05 Thread Mark Knecht
On Sat, Jun 1, 2024 at 10:38 PM Dale  wrote:
>
> Howdy, again,
>

Hi Dale,
   As part of a little AI project I'm working on I ran across a relatively
inexpensive idea you might want to look into. The basic idea is to
use a PCIe -> M.2 E Key card and on that card load an M.2 E Key
to SATA adapter.

First, an E Key adapter card:

https://www.amazon.com/Ableconn-PEXM2150E-Express-Adapter-Socket/dp/B07D6ZCBHY

And second, a 6 port E Key to SATA adapter:

https://www.amazon.com/SATA3-0-Adapter-Expansion-Interface-Indicator/dp/B0B75JWXXS

I have NO NO NO idea if this would actually work, or how well it would
work, but
for roughly $60 it might give you a path to the silly number of hard drives
you want
to run. ;-)

In my case I'm looking at this same card, but loaded with a neural network
processor
running Tensorflow Lite.

Anyway, I thought you might be interested.

Cheers,
Mark


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Dale
Wol wrote:
> On 04/06/2024 23:11, Dale wrote:
>> I started a thread maybe a decade ago about where computers were
>> going next.  Even then, clock frequency was getting close to the
>> limit.  At some point, a high frequency just can't go down a
>> motherboard and all its traces.  It seems to combat that problem,
>> they are putting as much of the fast stuff as they can on the CPU die
>> which can handle the higher frequencies.  It kinda makes sense
>> really.  I still wonder if one day, we buy a board with a chip,
>> memory slots and then a couple ports for video, data storage and user
>> inputs.  That's it.  In a way, it's not far from that now.
>
> I don't know how long ago it was, but it's quite a long time ago that
> silicon traces got so slim that that quantum tunnelling is a problem.
> Are we down to 5nm traces?
>
> Either way, we are down to traces about 5 or 10 atoms wide. At which
> point electrons can just "magically" quantum jump between tracks.
> Obviously, this is quite serious before if your ones and noughts
> consist of just a few electrons, and they can randomly jump about,
> you're going to get bit errors left right and centre.
>
> Cheers,
> Wol
>
> .
>


According to the link Mark posted, they are down to 3 nm on some Apple
M3 and M4 chips.  At that size, how much current can even flow through
that thing???  There are several others at 4 nm and 5nm.  Quite a few
actually.  It seems they found a way to break quantum law.  o_O  They
seem to have done that about 3 years ago. 

Somebody has a serious slide rule.  O_O 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Dale
Mark Knecht wrote:
>
>
> On Tue, Jun 4, 2024 at 3:12 PM Dale  > wrote:
> 
> > Holy crap.  That is amazing.  As you say, just one out of all of
> them and it is a bad chip, whether it is buggy or just plain dead.  I
> was expecting more like close to or into the billions.  I was not
> expecting that.  Can you imagine if a chip had to be made with
> discrete components, as in discrete transistors not a chip?  A
> motherboard would likely need to be measured in yards instead of
> inches.  Even that would be putting things as close together as they
> will fit.  That's a LOT of transistors.  Talk about a bulk discount. 
> ROFL  I checked out your link.  I knew the "process" as they call it
> was getting smaller but it is smaller than I thought.  They to the
> point where they almost don't exist.
> >
> > I started a thread maybe a decade ago about where computers were
> going next.  Even then, clock frequency was getting close to the
> limit.  At some point, a high frequency just can't go down a
> motherboard and all its traces.  It seems to combat that problem, they
> are putting as much of the fast stuff as they can on the CPU die which
> can handle the higher frequencies.  It kinda makes sense really.  I
> still wonder if one day, we buy a board with a chip, memory slots and
> then a couple ports for video, data storage and user inputs.  That's
> it.  In a way, it's not far from that now.
> >
> > Dang!!
> >
>
> Yeah, I loved working in that industry for the time I was there.
>
> While total transistor count is one measure, that changes a lot based
> on what the product is. A processor is larger than a network
> controller so you get big numbers due to the fact that it's a
> processor or a memory.
>
> To me the really amazing number is the transistor density. Stop and
> think about how small 1 millimeter is and then look at transistor
> density. The Intel 8080 had about 250 transistors in 1 mm sq. By the
> time the 8086 came along it's about 900 transistors per mm sq. The 486
> was around 7,000, but the most dense logic processes these days, the
> technology that a processor is built using is awe inspiring. The AMD
> MI300 will have something like 144,000,000 transistors per mm sq. 
>
> Moore's Law is an amazing thing...
>


They can do a lot with the space of a grain of rice.  O_O  It's amazing
that these chips even pass quality control really.  It's either 100%
right or it's bad.  No errors.  I saw a video on TV once ages ago and
they were looking through a high powered microscope at the individual
components.  Not a magnifying glass or even a regular microscope.  A
high powered one.  Microscope was pretty large too.  That's getting tiny
when you need equipment like that just to see it.  That was also several
years ago, maybe 5 or 10 years or more. 

I wonder where we will be in another ten years. 

Dale

:-)  :-) 


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Wol

On 04/06/2024 23:11, Dale wrote:
I started a thread maybe a decade ago about where computers were going 
next.  Even then, clock frequency was getting close to the limit.  At 
some point, a high frequency just can't go down a motherboard and all 
its traces.  It seems to combat that problem, they are putting as much 
of the fast stuff as they can on the CPU die which can handle the higher 
frequencies.  It kinda makes sense really.  I still wonder if one day, 
we buy a board with a chip, memory slots and then a couple ports for 
video, data storage and user inputs.  That's it.  In a way, it's not far 
from that now.


I don't know how long ago it was, but it's quite a long time ago that 
silicon traces got so slim that quantum tunnelling is a problem. 
Are we down to 5nm traces?


Either way, we are down to traces about 5 or 10 atoms wide. At which 
point electrons can just "magically" quantum jump between tracks. 
Obviously, this is quite serious because if your ones and noughts consist 
of just a few electrons, and they can randomly jump about, you're going 
to get bit errors left, right and centre.


Cheers,
Wol



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Mark Knecht
On Tue, Jun 4, 2024 at 3:12 PM Dale  wrote:

> Holy crap.  That is amazing.  As you say, just one out of all of them and
it is a bad chip, whether it is buggy or just plain dead.  I was expecting
more like close to or into the billions.  I was not expecting that.  Can
you imagine if a chip had to be made with discrete components, as in
discrete transistors not a chip?  A motherboard would likely need to be
measured in yards instead of inches.  Even that would be putting things as
close together as they will fit.  That's a LOT of transistors.  Talk about
a bulk discount.  ROFL  I checked out your link.  I knew the "process" as
they call it was getting smaller but it is smaller than I thought.  They to
the point where they almost don't exist.
>
> I started a thread maybe a decade ago about where computers were going
next.  Even then, clock frequency was getting close to the limit.  At some
point, a high frequency just can't go down a motherboard and all its
traces.  It seems to combat that problem, they are putting as much of the
fast stuff as they can on the CPU die which can handle the higher
frequencies.  It kinda makes sense really.  I still wonder if one day, we
buy a board with a chip, memory slots and then a couple ports for video,
data storage and user inputs.  That's it.  In a way, it's not far from that
now.
>
> Dang!!
>

Yeah, I loved working in that industry for the time I was there.

While total transistor count is one measure, that changes a lot based on
what the product is. A processor is larger than a network controller so you
get big numbers due to the fact that it's a processor or a memory.

To me the really amazing number is the transistor density. Stop and think
about how small 1 millimeter is and then look at transistor density. The
Intel 8080 had about 250 transistors in 1 mm sq. By the time the 8086 came
along it's about 900 transistors per mm sq. The 486 was around 7,000, but
the densest logic processes these days, the technology that a processor
is built with, are awe inspiring. The AMD MI300 will have something like
144,000,000 transistors per mm sq.

Moore's Law is an amazing thing...
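
Taking those density figures at face value, a couple of lines of arithmetic 
show what kind of doubling rate they imply (the release years are approximate 
and only for illustration):

import math

# Density figures quoted above (transistors per mm^2); years are approximate.
first_year, first_density = 1974, 250            # Intel 8080
last_year,  last_density  = 2023, 144_000_000    # AMD MI300

growth = last_density / first_density
doublings = math.log2(growth)
years = last_year - first_year
print(f"density grew ~{growth:,.0f}x over {years} years")
print(f"that's ~{doublings:.1f} doublings, roughly one every {years / doublings:.1f} years")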


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Dale
Mark Knecht wrote:
>
>
> On Tue, Jun 4, 2024 at 2:24 PM Mark Knecht  > wrote:
> >
> >
> >
> > On Tue, Jun 4, 2024 at 2:04 PM Dale  > wrote:
> > 
> > > Well, there is a lot to be said about moving what used to be
> external to internal.  It does result in faster moves for pretty much
> everything.  Moving data from one side of a chip to another is faster
> than moving data out of a chip and then back in again.  I bet there is
> millions of transistors on a CPU chip nowadays.  I need to google that
> Threadripper CPU.  64 cores in the top model I think.  I bet it has a
> ton of transistors in it.
> > >
> >
> > Nah, think billions. I designed chips with millions of transistors
> in the early 1980's...
> >
>
> I see I was wrong. According to Wikipedia we are now at over 2
> trillion transistors for a processor and 5 trillion transistors for a
> memory.
>
> https://en.wikipedia.org/wiki/Transistor_count
>
> Keep in mind that if even 1 out of 2 trillion transistors doesn't work
> the processor could be completely dead or have bugs that are very hard
> to discover.
>
> That's life in the world of semiconductors...


Holy crap.  That is amazing.  As you say, just one out of all of them
and it is a bad chip, whether it is buggy or just plain dead.  I was
expecting more like close to or into the billions.  I was not expecting
that.  Can you imagine if a chip had to be made with discrete
components, as in discrete transistors not a chip?  A motherboard would
likely need to be measured in yards instead of inches.  Even that would
be putting things as close together as they will fit.  That's a LOT of
transistors.  Talk about a bulk discount.  ROFL  I checked out your
link.  I knew the "process" as they call it was getting smaller but it
is smaller than I thought.  They're to the point where they almost don't
exist.

I started a thread maybe a decade ago about where computers were going
next.  Even then, clock frequency was getting close to the limit.  At
some point, a high frequency just can't go down a motherboard and all
its traces.  It seems to combat that problem, they are putting as much
of the fast stuff as they can on the CPU die which can handle the higher
frequencies.  It kinda makes sense really.  I still wonder if one day,
we buy a board with a chip, memory slots and then a couple ports for
video, data storage and user inputs.  That's it.  In a way, it's not far
from that now. 

Dang!!

Dale

:-)  :-) 


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Mark Knecht
On Tue, Jun 4, 2024 at 2:24 PM Mark Knecht  wrote:
>
>
>
> On Tue, Jun 4, 2024 at 2:04 PM Dale  wrote:
> 
> > Well, there is a lot to be said about moving what used to be external
to internal.  It does result in faster moves for pretty much everything.
Moving data from one side of a chip to another is faster than moving data
out of a chip and then back in again.  I bet there is millions of
transistors on a CPU chip nowadays.  I need to google that Threadripper
CPU.  64 cores in the top model I think.  I bet it has a ton of transistors
in it.
> >
>
> Nah, think billions. I designed chips with millions of transistors in the
early 1980's...
>

I see I was wrong. According to Wikipedia we are now at over 2 trillion
transistors for a processor and 5 trillion transistors for a memory.

https://en.wikipedia.org/wiki/Transistor_count

Keep in mind that if even 1 out of 2 trillion transistors doesn't work the
processor could be completely dead or have bugs that are very hard to
discover.

That's life in the world of semiconductors...
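
Purely as an illustration of that point: if each transistor were an 
independent coin flip, the chance a whole die comes out working falls off 
exponentially with the count.  A tiny sketch with made-up failure 
probabilities:

import math

# Illustration only: if each of N transistors failed independently with
# probability p, the chance the whole die still works is (1 - p)**N,
# which is roughly exp(-p * N) for tiny p.
N = 2_000_000_000_000   # ~2 trillion transistors, the figure quoted above

for p in (1e-15, 1e-13, 1e-12):
    print(f"per-transistor failure probability {p:g}: "
          f"chance the chip works ~ {math.exp(-p * N):.3f}")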


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Mark Knecht
On Tue, Jun 4, 2024 at 2:04 PM Dale  wrote:

> Well, there is a lot to be said about moving what used to be external to
internal.  It does result in faster moves for pretty much everything.
Moving data from one side of a chip to another is faster than moving data
out of a chip and then back in again.  I bet there is millions of
transistors on a CPU chip nowadays.  I need to google that Threadripper
CPU.  64 cores in the top model I think.  I bet it has a ton of transistors
in it.
>

Nah, think billions. I designed chips with millions of transistors in the
early 1980's...

And some of the newest AMD announcements are now talking about 192 cores
and 384 threads. I don't think they've revealed anything about whether
every core can run at full speed but they are quoting something like 5.8GHz
full speed so we shall see.



> Hope you stick around.  Sometimes things come up that are more Linuxy
than Gentooy.  LOL  I might add, I've read info on Ubuntu and other distros
that solved issues on my Gentoo rig.  Less so with the systemd switch now
but still happens on occasion.

I have no intention of going anywhere, nor do I intend to interfere with
specifically Gentoo-ish discussions. I did comment about the Python 3.12
topic today because I just went through something similar on Kubuntu which
made me learn more about Python's virtual environments. In my case a single
library involved in playing contract bridge wouldn't install but it wasn't
something provided by Kubuntu so I reported it on Github and it was fixed
today. Worked out nicely.

Best wishes,
Mark


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Dale
Mark Knecht wrote:
>
>
> On Tue, Jun 4, 2024 at 1:27 PM Dale  > wrote:
> >
> > Mark Knecht wrote:
> >
> >
> >
> > On Tue, Jun 4, 2024 at 11:35 AM Dale  > wrote:
> 
> Is the new way taking out what used to be called a northbridge or
> southbridge chip or both chips?
> 
>
> Not exactly taking out but rather repartitioning. Much of what used to
> be in the North Bridge, such
> as the memory interface, is now to a great extent in the processor. 
> The general idea of the 
> South Bridge originally was to interface to customer centric
> interfaces like PCI slots, USB and
> networking. That all still exists but the interface back to the
> processor has changed with a lot
> of it becoming high speed serial specs which reduce the number of pins
> on the processor.

Well, there is a lot to be said about moving what used to be external to
internal.  It does result in faster moves for pretty much everything. 
Moving data from one side of a chip to another is faster than moving
data out of a chip and then back in again.  I bet there is millions of
transistors on a CPU chip nowadays.  I need to google that Threadripper
CPU.  64 cores in the top model I think.  I bet it has a ton of
transistors in it.


>
> >
> > Thanks for the info.  I was wondering where you were.  _-O
> >
>
> I'm still here. Just don't post much as I'm not a Gentoo user and
> figure I have little to
> offer until a thread like this comes up. I used to do chip
> architecture for AMD, mostly
> South Bridge I/O stuff when it was still primarily in California.
> Those are now very much
> the old days through.
>
> Cheers,
> Mark

Hope you stick around.  Sometimes things come up that are more Linuxy
than Gentooy.  LOL  I might add, I've read info on Ubuntu and other
distros that solved issues on my Gentoo rig.  Less so with the systemd
switch now but still happens on occasion.

Dale

:-)  :-) 


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Mark Knecht
On Tue, Jun 4, 2024 at 1:27 PM Dale  wrote:
>
> Mark Knecht wrote:
>
>
>
> On Tue, Jun 4, 2024 at 11:35 AM Dale  wrote:

Is the new way taking out what used to be called a northbridge or
southbridge chip or both chips?


Not exactly taking out but rather repartitioning. Much of what used to be
in the North Bridge, such
as the memory interface, is now to a great extent in the processor.  The
general idea of the
South Bridge originally was to interface to customer centric interfaces
like PCI slots, USB and
networking. That all still exists but the interface back to the processor
has changed with a lot
of it becoming high speed serial specs which reduce the number of pins on
the processor.

>
> Thanks for the info.  I was wondering where you were.  _-O
>

I'm still here. Just don't post much as I'm not a Gentoo user and figure I
have little to
offer until a thread like this comes up. I used to do chip architecture for
AMD, mostly
South Bridge I/O stuff when it was still primarily in California. Those are
now very much
the old days though.

Cheers,
Mark


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Dale
Mark Knecht wrote:
>
>
> On Tue, Jun 4, 2024 at 11:35 AM Dale  > wrote:
> >
> > I'm thinking one good HBA will handle the current set of drives and give
> > decent speed at that in the Fractal case when the new mobo ends up
> > there.  The Fractal can handle 20 that I can count easily.  There's 4 in
> > the bottom, 11 in a tall stack for 3.5".  Then there is others for SSD
> > which I think can be changed to 3.5" drives.  Plus, I might could add a
> > couple cages in that massive case with a little rigging.  Then again,
> > there is this thing.
> >
> > https://www.ebay.com/itm/326063682173
> >
> > Pardon me.  I was drooling again.  LOL  In my new rig, I'm going to wait
> > until I get the thing here and then figure out what to do.  See what is
> > the best option.
> >
> > Then again.  This has cables that come with it and price is good.  Your
> > thoughts?
> >
> > https://www.ebay.com/itm/116051832382
> >
> > How many SATA drives that allow?  It says 8 ports.  I think each port is
> > 4 drives.  8 x 4 = 32.  Am I right?  Keep in mind, one of my large LVMs
> > is for torrent files.  It just goes as fast as the controller will
> > allow.  It just takes longer to share with a slower card.
> >
> > I need a really good mobo to make a NAS type rig out of and put it in
> > the Fractal case.  PCIe slots is a must tho.  Those just not happening.
> > May have to find a used server mobo or something.
> >
> > Dale
>
> Hi Dale,
>    I have resisted getting involved in this long thread. Frankly, I'm not
> really understanding the purpose of your 'new rig', and that's OK. It's
> your new rig.
>
>    I would like to point out a few things about current system 
> architectures that (I think) haven't come up in your thread, but 
> please disregard any and all of this if they have been addressed.
>
> 1) Probably the most important change that's happened over
> the last few years with PCIe is that now much of it is completely
> inside the CPU chip. When you investigate motherboard/CPU
> combinations be aware that your PCIe slots are EITHER CPU
> slots OR chipset slots. While all of them run at 8GHz, the ones
> coming out of the CPU have lower latency because data doesn't 
> have to go through the chipset. 
>
> 2) A PCIe 1x slot still has a lot of bandwidth - essentially 1GB/S.
> (8GHz / 8bits/byte == 1GB/S) How much sustained disk 
> bandwidth do you need? A single 4x slot is a heck of a lot 
> of data if the machine can support it.
>
> 3) Watch your CPU choice carefully and match it (within budget
> constraints) to the layout of the motherboard. If the motherboard
> is laid out for a 24 channel PCIe CPU and you buy a 20 channel
> CPU that's OK. However if it's laid out for a 24 channel CPU and
> you upgrade to a 32 channel CPU later then you will only be
> using 24 channels.
>
> 4) Keep in mind that in general a gaming motherboard is 
> designed to get data to the GPU as fast as possible so that 
> 16x PCIe slot will be using the bulk of the CPU channels. As
> you seem to be heavily focused on huge numbers of drives
> that may be an important consideration for you.
>
> Lastly, none of what I say above is guaranteed to be correct.
> Continue to do your own research and ask questions. You'll
> get there.
>
> Good luck with whatever choices you make.
>
> Over and out,
> Mark


I wasn't completely understanding how that worked but I think Rich
mentioned something about it when we were discussing the 7600 CPU.  It
just didn't sink in.  I think the CPU I picked will be OK.  Later I'll
move up to the 7900 or 7950 CPU.  That could max the mobo out or with a
firmware update it may can go higher.  Is the new way taking out what
used to be called a northbridge or southbridge chip or both chips?  My
current Gigabyte mobo has both of those.  The NAS box does too.  In the
picture of the ASUS mobo, I don't see either but it could be under one of
those metal heat sinks.  That would explain a lot tho. 

To me, it seems as if mobo makers are focused on gaming and things
revolving around USB.  Neither of which is something I need.  I need
PCIe slots, not USB ports that are insanely fast for my use.  Keep in
mind, I plug my mouse and keyboard into a USB port and might use at most
two USB ports for USB sticks or memory cards from trail cameras.  USB
2.0 is more than enough for those and what I do with them.  The trail
camera cards are slow anyways.  The last couple USB sticks I bought are
3.0 but my ports on front of the case are 2.0.  It's still plenty fast
enough for me. 

I wish a mobo would pop up with less USB ports but several PCIe x1 and a
couple PCIe x8 slots.  I'd be in heaven.  :-D 

Thanks for the info.  I was wondering where you were.  _-O

Dale

:-)  :-) 


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Dale
Rich Freeman wrote:
> On Tue, Jun 4, 2024 at 2:34 PM Dale  wrote:
>> https://www.ebay.com/itm/116051832382
>>
>> How many SATA drives that allow?  It says 8 ports.  I think each port is
>> 4 drives.  8 x 4 = 32.  Am I right?
> It has two SAS ports, which support 8 SATA drives.  Marketing...
>
> I can't vouch for compatibility - I'd do some google research, but if
> you don't find bad news this would fit your purposes.  A 4x slot
> should handle 8 HDDs just fine.
>


Well, 8 drives isn't enough.  Maybe by the time I get to where the new
ASUS mobo is in the Fractal case, I'll have better options.  Maybe.  I
can hope anyway.  It is several years down the road yet.  I was going to
grab a good deal, well, it was a good deal.  ;-D

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Mark Knecht
On Tue, Jun 4, 2024 at 11:35 AM Dale  wrote:
>
> I'm thinking one good HBA will handle the current set of drives and give
> decent speed at that in the Fractal case when the new mobo ends up
> there.  The Fractal can handle 20 that I can count easily.  There's 4 in
> the bottom, 11 in a tall stack for 3.5".  Then there is others for SSD
> which I think can be changed to 3.5" drives.  Plus, I might could add a
> couple cages in that massive case with a little rigging.  Then again,
> there is this thing.
>
> https://www.ebay.com/itm/326063682173
>
> Pardon me.  I was drooling again.  LOL  In my new rig, I'm going to wait
> until I get the thing here and then figure out what to do.  See what is
> the best option.
>
> Then again.  This has cables that come with it and price is good.  Your
> thoughts?
>
> https://www.ebay.com/itm/116051832382
>
> How many SATA drives that allow?  It says 8 ports.  I think each port is
> 4 drives.  8 x 4 = 32.  Am I right?  Keep in mind, one of my large LVMs
> is for torrent files.  It just goes as fast as the controller will
> allow.  It just takes longer to share with a slower card.
>
> I need a really good mobo to make a NAS type rig out of and put it in
> the Fractal case.  PCIe slots is a must tho.  Those just not happening.
> May have to find a used server mobo or something.
>
> Dale

Hi Dale,
   I have resisted getting involved in this long thread. Frankly, I'm not
really understanding the purpose of your 'new rig', and that's OK. It's
your new rig.

   I would like to point out a few things about current system
architectures that (I think) haven't come up in your thread, but
please disregard any and all of this if they have been addressed.

1) Probably the most important change that's happened over
the last few years with PCIe is that now much of it is completely
inside the CPU chip. When you investigate motherboard/CPU
combinations be aware that your PCIe slots are EITHER CPU
slots OR chipset slots. While all of them run at 8GHz, the ones
coming out of the CPU have lower latency because data doesn't
have to go through the chipset.

2) A PCIe 1x slot still has a lot of bandwidth - essentially 1GB/S.
(8GHz / 8bits/byte == 1GB/S) How much sustained disk
bandwidth do you need? A single 4x slot is a heck of a lot
of data if the machine can support it.

3) Watch your CPU choice carefully and match it (within budget
constraints) to the layout of the motherboard. If the motherboard
is laid out for a 24 channel PCIe CPU and you buy a 20 channel
CPU that's OK. However if it's laid out for a 24 channel CPU and
you upgrade to a 32 channel CPU later then you will only be
using 24 channels.

4) Keep in mind that in general a gaming motherboard is
designed to get data to the GPU as fast as possible so that
16x PCIe slot will be using the bulk of the CPU channels. As
you seem to be heavily focused on huge numbers of drives
that may be an important consideration for you.

Lastly, none of what I say above is guaranteed to be correct.
Continue to do your own research and ask questions. You'll
get there.

Good luck with whatever choices you make.

Over and out,
Mark


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Rich Freeman
On Tue, Jun 4, 2024 at 2:34 PM Dale  wrote:
>
> https://www.ebay.com/itm/116051832382
>
> How many SATA drives that allow?  It says 8 ports.  I think each port is
> 4 drives.  8 x 4 = 32.  Am I right?

It has two SAS ports, which support 8 SATA drives.  Marketing...

I can't vouch for compatibility - I'd do some google research, but if
you don't find bad news this would fit your purposes.  A 4x slot
should handle 8 HDDs just fine.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Dale
Rich Freeman wrote:
> On Tue, Jun 4, 2024 at 7:56 AM Dale  wrote:
>> Rich Freeman wrote:
>>> On Tue, Jun 4, 2024 at 2:44 AM Dale  wrote:
>>>
 The little SATA controllers I currently use tend to only need PCIe x1.
 That is slower but at least it works.
>>> The LSI cards will work just as well in a 1x.  That is, assuming you
>>> only plug as many drives into it as you do one of those SATA HBAs and
>>> don't expect miraculous bandwidth out of them.
>> So the LSI controllers are a option, just a little slower.  Cool.
> No, they're just as fast if not faster than the SATA controllers.
>
> The LSI HBA in a 1x slot will be slower than the LSI HBA in a 4x slot,
> which will be slower than an LSI HBA in an 8x slot.  That is, assuming
> you're bandwidth-limited.
>
> The SATA controller you're used to can't put any more data through a
> 1x slot than an LSI HBA.  The reason the HBA has an 8x slot is so that
> it can move larger amounts of data.  The SATA board is marketed to
> consumers who care more about cramming more drives into their system
> than whether those drives operate at full performance.  They also tend
> to have fewer ports-  if you only are adding 2 SATA ports, then a 1x
> slot isn't a huge bottleneck.
>
> The bottom line is that if you are using an older v2 HBA, then it can
> transfer 500MB/s per PCIe lane.  If you're only transferring 100MB/s
> then it doesn't matter how many lanes you have.  If you're
> transferring 4GB/s then the number of lanes becomes critical if you
> need to sustain that bandwidth.  If you're using the HBA to cram 4
> 5400 RPM HDDs into your system that is a very different demand
> compared to adding 16 Micron enterprise SSDs.

That is what was in my head.  It just didn't quite make it to the
keyboard.  Basically, it will be slower in a x4 slot compared to a x8
slot.  Of course, x1 will be slower than them all.  I keep forgetting
that you can put a x4 card in a x1 slot and it still work, just slower. 
That also explains why they have mechanical x16 slots for electrical x4
cards.  Why not just cut out the end so the x part is more obvious by
the length of the slot but any card will fit???  As it is, you have to
look for the pins to see if it is x1, x4, x8 or a full x16 slot. 
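
(Side note: once a card is seated, lspci from pciutils will show what a slot
actually negotiated versus what the card is capable of.  Just a sketch; the
01:00.0 address is only an example, find the real one with plain lspci first:

$ # LnkCap = what the card can do, LnkSta = what it actually negotiated
$ sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
LnkCap: Port #0, Speed 5GT/s, Width x8, ...
LnkSta: Speed 5GT/s, Width x4, ...

If LnkSta shows a smaller width than LnkCap, the card is running in a
narrower slot, which is exactly the x8-card-in-a-x4-slot case.)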

>> The
>> main reason I wanted to go with the SAS to SATA controller, number of
>> drives I can connect.  Keep in mind, I need to get to almost 20 drives
>> but with only one card.
> Yup, getting 8/16 SATA disks on one HBA isn't a problem.  Depending on
> how fast those drives are and your data access patterns, the number of
> PCIe lanes might or might not be an issue.  Just add up your total
> data transfer rate and look up the PCIe specs.
>

I'm thinking one good HBA will handle the current set of drives and give
decent speed at that in the Fractal case when the new mobo ends up
there.  The Fractal can handle 20 that I can count easily.  There's 4 in
the bottom, 11 in a tall stack for 3.5".  Then there is others for SSD
which I think can be changed to 3.5" drives.  Plus, I might could add a
couple cages in that massive case with a little rigging.  Then again,
there is this thing. 

https://www.ebay.com/itm/326063682173

Pardon me.  I was drooling again.  LOL  In my new rig, I'm going to wait
until I get the thing here and then figure out what to do.  See what is
the best option. 

Then again.  This has cables that come with it and price is good.  Your
thoughts?

https://www.ebay.com/itm/116051832382

How many SATA drives that allow?  It says 8 ports.  I think each port is
4 drives.  8 x 4 = 32.  Am I right?  Keep in mind, one of my large LVMs
is for torrent files.  It just goes as fast as the controller will
allow.  It just takes longer to share with a slower card. 

I need a really good mobo to make a NAS type rig out of and put it in
the Fractal case.  PCIe slots is a must tho.  Those just not happening. 
May have to find a used server mobo or something. 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Rich Freeman
On Tue, Jun 4, 2024 at 8:03 AM Dale  wrote:
>
> I wish I could get one that comes with cables
> that go from the card to SATA drives as a set.  That way I know that
> part is right.

I'm pretty sure there are only two standards - one for external, and
one for internal.  Maybe there are different connector sizes.  Just
look up the datasheet on the HBA and it will tell you what connectors
it has.  Then google for the breakout to SATA cable.

> At this point, it seems any LSI card will likely work.

I wouldn't make that claim.  They're definitely one of the biggest
brand but some of their fancier products can be less oriented towards
what you're doing.  Some might require weird disk formats even if
you're exposing individual disks, making the disks unreadable with
other controllers.

> I still wish that mobo had more PCIe slots. It just gives more options
> down the road.  The speed and memory will be nice tho.  When I finally
> hit the order or pay button.  :/

Unfortunately, DIMMs and PCIe slots have become the hallmark of
workstation and server motherboards.  I think this is due to the
adoption of the SOC model on the CPUs.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Rich Freeman
On Tue, Jun 4, 2024 at 7:56 AM Dale  wrote:
>
> Rich Freeman wrote:
> > On Tue, Jun 4, 2024 at 2:44 AM Dale  wrote:
> >
> >> The little SATA controllers I currently use tend to only need PCIe x1.
> >> That is slower but at least it works.
> > The LSI cards will work just as well in a 1x.  That is, assuming you
> > only plug as many drives into it as you do one of those SATA HBAs and
> > don't expect miraculous bandwidth out of them.
>
> So the LSI controllers are a option, just a little slower.  Cool.

No, they're just as fast if not faster than the SATA controllers.

The LSI HBA in a 1x slot will be slower than the LSI HBA in a 4x slot,
which will be slower than an LSI HBA in an 8x slot.  That is, assuming
you're bandwidth-limited.

The SATA controller you're used to can't put any more data through a
1x slot than an LSI HBA.  The reason the HBA has an 8x slot is so that
it can move larger amounts of data.  The SATA board is marketed to
consumers who care more about cramming more drives into their system
than whether those drives operate at full performance.  They also tend
to have fewer ports-  if you only are adding 2 SATA ports, then a 1x
slot isn't a huge bottleneck.

The bottom line is that if you are using an older v2 HBA, then it can
transfer 500MB/s per PCIe lane.  If you're only transferring 100MB/s
then it doesn't matter how many lanes you have.  If you're
transferring 4GB/s then the number of lanes becomes critical if you
need to sustain that bandwidth.  If you're using the HBA to cram 4
5400 RPM HDDs into your system that is a very different demand
compared to adding 16 Micron enterprise SSDs.
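
As a purely illustrative back-of-the-envelope check (the drive count and the
~150MB/s per drive figure are assumptions, the ~500MB/s per lane is the v2
number above):

$ # 20 HDDs at ~150 MB/s sustained each vs a 4-lane PCIe v2 link
$ echo "drives: $(( 20 * 150 )) MB/s   link: $(( 4 * 500 )) MB/s"
drives: 3000 MB/s   link: 2000 MB/s

So streaming all 20 drives flat out would outrun a v2 4x link, but a workload
that only touches a few drives at a time never gets close.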

> The
> main reason I wanted to go with the SAS to SATA controller, number of
> drives I can connect.  Keep in mind, I need to get to almost 20 drives
> but with only one card.

Yup, getting 8/16 SATA disks on one HBA isn't a problem.  Depending on
how fast those drives are and your data access patterns, the number of
PCIe lanes might or might not be an issue.  Just add up your total
data transfer rate and look up the PCIe specs.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Dale
Rich Freeman wrote:
> On Tue, Jun 4, 2024 at 6:19 AM Dale  wrote:
>> You ever seen one of these?
>>
>> https://www.ebay.com/itm/274651287952
>>
>> Is that taking one of those fast USB ports and converting it to a PCIe
>> x1 slot?  Am I seeing that right?
>>
> No, it is taking a PCIe 1x slot, using a USB cable, and converting it
> into 4 PCIe 16x slots (likely wired at 1x).  I doubt that it is using
> standard USB though it might be.  Thunderbolt can be used to do PCIe
> over a USB-C form factor I think, and there is some chance this device
> is making use of that.
>
> I've used a similar device to connect an 8x LSI HBA to a 1x PCIe slot
> on an rk3399 ARM SBC (I needed the riser to provide additional power).
> In that case it was just one slot to one slot, so it didn't need a
> switch.  To run 4 slots off of a single 1x would require a PCIe
> switch, which that board no-doubt uses.
>
> PCIe is not unlike ethernet - there is quite a bit you can do with it.
> The main problem is that there just isn't much consumer-oriented
> hardware floating around - lots of specialized solutions with chips
> embedded in them, which are harder to adapt to other problems.
> Another issue is that cases/etc rarely provide convenient mounting
> solutions when you start using this stuff.
>
> Take the motherboard you're using.  That has PCIe v5 in one of the M.2
> slots, and PCIe v4 in most of the rest of the interfaces.  There is no
> reason you couldn't run all that into a switch and have a TON of 8x
> PCIe v2 slots to use with older HBAs and such.  That one M.2 v5 slot
> could run 4 8x PCIe v2 slots just on its own.  You just need the right
> adapter, and those are hard to find from reputable companies.  There
> is all manner of stuff on ali express and so on, but now you're mining
> forums to figure out if they actually work before you buy them.
>


I was trying to figure out what it was doing but wasn't sure.  I've just
never seen one of those before. 

At some point, I'm going to find a SAS card and post about it.  When I'm
ready to buy one that is.  I wish I could get one that comes with cables
that go from the card to SATA drives as a set.  That way I know that
part is right.  At this point, it seems any LSI card will likely work. 
Just avoid RAID. 

I still wish that mobo had more PCIe slots. It just gives more options
down the road.  The speed and memory will be nice tho.  When I finally
hit the order or pay button.  :/

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Dale
Rich Freeman wrote:
> On Tue, Jun 4, 2024 at 2:44 AM Dale  wrote:
>> I did some more digging.  It seems that all the LSI SAS cards I found
>> need a PCIe x8 slot.  The only slot available is the one intended for
>> video.
> The board you linked has 2 4x slots that are physically 16x, so the
> card should work fine in those, just at 4x speed.
>
>> I'd rather not
>> use it on the new build because I've thought about having another
>> monitor added for desktop use so I would need three ports at least.
> You actually could put the video card in one of those 4x slots if you
> wanted to prioritize IO for the HBA, though I would only do that if
> you weren't playing games, and had a lot of SSDs plugged into the HBA.
> Crypto miners routinely run GPUs in 1x slots.
>
>> The little SATA controllers I currently use tend to only need PCIe x1.
>> That is slower but at least it works.
> The LSI cards will work just as well in a 1x.  That is, assuming you
> only plug as many drives into it as you do one of those SATA HBAs and
> don't expect miraculous bandwidth out of them.
>
> The only gotcha is that on the board you linked the 1x slot doesn't
> say it can accommodate cards larger than 1x, so you might need a riser
> and someplace to put the card.
>


So the LSI controllers are a option, just a little slower.  Cool.  The
main reason I wanted to go with the SAS to SATA controller, number of
drives I can connect.  Keep in mind, I need to get to almost 20 drives
but with only one card.  Almost none of the mobos nowadays has more than
a couple PCIe slots.  Right now in my current rig, I have two little
SATA cards and put one LVM on one card, another LVM on the other card. 
To kinda balance out the speed.  What little speed there is.  ;-) 

At least I know it is a option even if I don't or can't use the video
slot. 

Thanks.

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Rich Freeman
On Tue, Jun 4, 2024 at 6:19 AM Dale  wrote:
>
> You ever seen one of these?
>
> https://www.ebay.com/itm/274651287952
>
> Is that taking one of those fast USB ports and converting it to a PCIe
> x1 slot?  Am I seeing that right?
>

No, it is taking a PCIe 1x slot, using a USB cable, and converting it
into 4 PCIe 16x slots (likely wired at 1x).  I doubt that it is using
standard USB though it might be.  Thunderbolt can be used to do PCIe
over a USB-C form factor I think, and there is some chance this device
is making use of that.

I've used a similar device to connect an 8x LSI HBA to a 1x PCIe slot
on an rk3399 ARM SBC (I needed the riser to provide additional power).
In that case it was just one slot to one slot, so it didn't need a
switch.  To run 4 slots off of a single 1x would require a PCIe
switch, which that board no-doubt uses.
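
If you're ever curious whether a given slot hangs off the CPU, the chipset,
or a switch like the one that riser uses, lspci can draw the bus topology -
nothing special needed beyond pciutils:

$ # -t prints the PCI tree, -v adds device names; anything nested under a
$ # bridge/switch entry shares that upstream link's bandwidth
$ lspci -tv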

PCIe is not unlike ethernet - there is quite a bit you can do with it.
The main problem is that there just isn't much consumer-oriented
hardware floating around - lots of specialized solutions with chips
embedded in them, which are harder to adapt to other problems.
Another issue is that cases/etc rarely provide convenient mounting
solutions when you start using this stuff.

Take the motherboard you're using.  That has PCIe v5 in one of the M.2
slots, and PCIe v4 in most of the rest of the interfaces.  There is no
reason you couldn't run all that into a switch and have a TON of 8x
PCIe v2 slots to use with older HBAs and such.  That one M.2 v5 slot
could run 4 8x PCIe v2 slots just on its own.  You just need the right
adapter, and those are hard to find from reputable companies.  There
is all manner of stuff on ali express and so on, but now you're mining
forums to figure out if they actually work before you buy them.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Dale
Rich Freeman wrote:


Rich,

You ever seen one of these? 

https://www.ebay.com/itm/274651287952

Is that taking one of those fast USB ports and converting it to a PCIe
x1 slot?  Am I seeing that right? 

USB ports are going to wash dishes before long.  LOL

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Rich Freeman
On Tue, Jun 4, 2024 at 2:44 AM Dale  wrote:
>
> I did some more digging.  It seems that all the LSI SAS cards I found
> need a PCIe x8 slot.  The only slot available is the one intended for
> video.

The board you linked has 2 4x slots that are physically 16x, so the
card should work fine in those, just at 4x speed.

> I'd rather not
> use it on the new build because I've thought about having another
> monitor added for desktop use so I would need three ports at least.

You actually could put the video card in one of those 4x slots if you
wanted to prioritize IO for the HBA, though I would only do that if
you weren't playing games, and had a lot of SSDs plugged into the HBA.
Crypto miners routinely run GPUs in 1x slots.

> The little SATA controllers I currently use tend to only need PCIe x1.
> That is slower but at least it works.

The LSI cards will work just as well in a 1x.  That is, assuming you
only plug as many drives into it as you do one of those SATA HBAs and
don't expect miraculous bandwidth out of them.

The only gotcha is that on the board you linked the 1x slot doesn't
say it can accommodate cards larger than 1x, so you might need a riser
and someplace to put the card.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Dale
Dale wrote:
> Rich Freeman wrote:
>> On Mon, Jun 3, 2024 at 11:17 AM Dale  wrote:
>>> When you say HBA.  Is this what you mean?
>>>
>>> https://www.ebay.com/itm/125486868824
>>>
>> Yes.  Typically they have mini-SAS interfaces, and you can get a
>> breakout cable that will attach one of those to 4x SATA ports.
>>
>> Some things to keep in mind when shopping for HBAs:
>> 1. Check for linux compatibility.  Not every card has great support.
>> 2. Flashing the firmware may require windows, and this may be
>> necessary to switch a card between RAID mode and IT mode, the latter
>> being what you almost certainly want, and the former being what most
>> enterprise admins tend to have them flashed as.  IT mode basically
>> exposes all the drives that are attached as a bunch of standalone
>> drives, while RAID mode will just expose a limited number of virtual
>> interfaces and the card bundles the disks into arrays (and if the card
>> dies, good luck ever reading those disks again until you reformat
>> them).
>> 3. Be aware they often use a ton of power.
>> 4. Take note of internal vs external ports.  You can get either.  They
>> need different cables, and if your disks are inside the case having
>> the ports on the outside isn't technically a show-stopper but isn't
>> exactly convenient.
>> 5. Take note of the interface speed and size.  The card you linked is
>> (I think) an 8x v2 card.  PCIe will auto-negotiate down, so if you
>> plug that card into your v4 4x slot it will run at v2 4x, which is
>> 2GB/s bandwidth.  That's half of what it is capable of, but probably
>> not a big issue.  If you want to plug 16 enterprise SSDs into it then
>> you'll definitely hit the PCIe bottleneck, but if you plug 16 consumer
>> 7200RPM HDDs into it you're only going to hit 2GB/s under fairly ideal
>> circumstances, and with fewer HDDs you couldn't hit it at all.  If you
>> pay more you'll get a newer PCIe revision, which means more bandwidth
>> for a given number of lanes.
>> 6. Check for hardware compatibility too.  Stuff from 1st parties like
>> Dell/etc might be fussy about wanting to be in a Dell server with
>> weird firmware interactions with the motherboard.  A 3rd party card
>> like LSI probably is less of an issue here, but check.
>>
>> Honestly, part of why I went the distributed filesystem route (Ceph
>> these days) is to avoid dealing with this sort of nonsense.  Granted,
>> now I'm looking to use more NVMe and if you want high capacity NVMe
>> that tends to mean U.2, and dealing with bifurcation and PCIe
>> switches, and just a different sort of nonsense
>>
>
> I've read that LSI is best.  I also noticed when looking a good while
> back that some Ebay listings said they flashed a card to IT mode.  I
> wasn't quite sure what that was until I saw the other mode was RAID.  I
> figured that was the JBOD mode basically.  This is used, which is fine,
> and pricey.  I'm mostly just trying to see if I'm headed down the right
> path.  There could be cheaper or better. 
>
> https://www.ebay.com/itm/391906134343
>
> That says it can support up to 256 devices.  Would there be a slot on
> the mobo, except for video slot, that it would fit into?  Next
> question.  Cables.  What do I search for to get the right cable?  It
> appears to be a bare card.  Are cables standard and the same or depends
> on card, brand etc?
>
> On the ASUS mobo.  Is that as good as I'm going to find?  I've yet to
> find anything better. 
>
> Dale
>
> :-)  :-) 
>


I did some more digging.  It seems that all the LSI SAS cards I found
need a PCIe x8 slot.  The only slot available is the one intended for
video.  Even my current rig doesn't have a slot available except for
video, which has no built in video at all on that mobo.  I'd rather not
use it on the new build because I've thought about having another
monitor added for desktop use so I would need three ports at least. 
Sometimes I need to move files around and be able to see file names,
sizes, dates and all to know what to move where.  Having a second
monitor would make that a lot easier.  Right now, I tend to switch from
one desktop to another.  It's time consuming when you have lots of files
to sort though one or just a few at a time. 

The little SATA controllers I currently use tend to only need PCIe x1. 
That is slower but at least it works.  I'm fine with the current speed
of the drives even over PCIe x1 and I think v2.0.  I would like to have
the SAS controller but I'll need to see how things pan out for a while
first.  If needed, I may use the video slot if I decide not to add
another monitor.

I'm still in a holding pattern on buying the ASUS mobo.  I don't expect
to find better.  It's just not really what I need other than being newer
and faster.  More memory too.  Rest is a downgrade. 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Frank Steinmetzger
On Sun, Jun 02, 2024 at 08:27:57AM -0500, Dale wrote:

> I thought of something on the m.2 thing.  I plan to put my OS on it.  I
> usually use tmpfs and compile in memory anyway but do have some set to
> use spinning rust. Once I get 128GB installed, I should be able to do
> that with all packages anyway but still, I had a question.  Should I put
> the portage work directory on a spinning rust drive to save wear and
> tear on the SSD or have they got to the point now that doesn't matter
> anymore?  I know all the SSD devices have improved a lot since the first
> ones came out. 

We’ve had this topic before. You can do some archaeology with dumpe2fs and 
extrapolate:

$ dumpe2fs -h /dev/mapper/vg-root
...
Filesystem created:   Sun Apr 17 16:47:03 2022
...
Lifetime writes:  877 GB


So that’s around 900 GB in 2 years. This is an Arch system, so may not 
experience quite as many writes from updates (especially not from any 
on-disk emerging), but Arch does have its own share of volume-heavy 
upgrades. Just today, after being away on travel for 11 days, I had to 
download 2.5 GB and unpack over 8 GB of files.


My home partition has accumulated 2600 GB in the same time. Firstly, it’s 
200 GB in size vs. 45 GB for the root system. And secondly, sometimes the 
baloo file extractor runs amok and keeps writing gigabytes of index files. 
It’s an Evo 970 Plus 2 TB, so I just scratched its guaranteed lifetime write 
amount.
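
On an NVMe drive there is no ext4 history to dig through on day one, but the 
drive keeps its own counter.  A rough equivalent check, assuming the new disk 
shows up as /dev/nvme0n1 (the device name is just an assumption):

$ # "Data Units Written" counts units of 512,000 bytes; smartctl also prints
$ # the converted total in brackets
$ sudo smartctl -A /dev/nvme0n1 | grep -i 'data units written'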

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

We don’t put a single drop of alcohol on the table...
we pour very cautiously.


signature.asc
Description: PGP signature


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Frank Steinmetzger
On Sun, Jun 02, 2024 at 12:38:13AM -0500, Dale wrote:

> I'll also max the memory out too.  I'm
> unclear on the max memory tho.  One place shows 128GB, hence two 32GB
> sticks.  The out of stock Newegg one claims 256GB, which would be nice. 
> I'm not sure what to think on memory.  Anyway.  If the thing is fast
> enough, I may do the memory first then CPU later.  If I need a faster
> CPU, I may do it first then the memory.

One interesting fact: four sticks run slower than two sticks. I don’t 
remember the exact technical reason, but it is so. Two sticks can run at the 
maximum stock speed (i.e. without overclocking profiles, which is 5200 MT/s 
for the 7600X’s memory controller). But four sticks are clocked lower.
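
Once the box is up you can check what speed the sticks actually negotiated.  
Just a sketch - the field is "Configured Memory Speed" on newer dmidecode 
versions and "Configured Clock Speed" on older ones:

$ # prints one entry per DIMM slot
$ sudo dmidecode -t memory | grep -i 'configured.*speed'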

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

It’s the same inside as it is outside, just different.


signature.asc
Description: PGP signature


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Dale
Rich Freeman wrote:
> On Mon, Jun 3, 2024 at 11:17 AM Dale  wrote:
>> When you say HBA.  Is this what you mean?
>>
>> https://www.ebay.com/itm/125486868824
>>
> Yes.  Typically they have mini-SAS interfaces, and you can get a
> breakout cable that will attach one of those to 4x SATA ports.
>
> Some things to keep in mind when shopping for HBAs:
> 1. Check for linux compatibility.  Not every card has great support.
> 2. Flashing the firmware may require windows, and this may be
> necessary to switch a card between RAID mode and IT mode, the latter
> being what you almost certainly want, and the former being what most
> enterprise admins tend to have them flashed as.  IT mode basically
> exposes all the drives that are attached as a bunch of standalone
> drives, while RAID mode will just expose a limited number of virtual
> interfaces and the card bundles the disks into arrays (and if the card
> dies, good luck ever reading those disks again until you reformat
> them).
> 3. Be aware they often use a ton of power.
> 4. Take note of internal vs external ports.  You can get either.  They
> need different cables, and if your disks are inside the case having
> the ports on the outside isn't technically a show-stopper but isn't
> exactly convenient.
> 5. Take note of the interface speed and size.  The card you linked is
> (I think) an 8x v2 card.  PCIe will auto-negotiate down, so if you
> plug that card into your v4 4x slot it will run at v2 4x, which is
> 2GB/s bandwidth.  That's half of what it is capable of, but probably
> not a big issue.  If you want to plug 16 enterprise SSDs into it then
> you'll definitely hit the PCIe bottleneck, but if you plug 16 consumer
> 7200RPM HDDs into it you're only going to hit 2GB/s under fairly ideal
> circumstances, and with fewer HDDs you couldn't hit it at all.  If you
> pay more you'll get a newer PCIe revision, which means more bandwidth
> for a given number of lanes.
> 6. Check for hardware compatibility too.  Stuff from 1st parties like
> Dell/etc might be fussy about wanting to be in a Dell server with
> weird firmware interactions with the motherboard.  A 3rd party card
> like LSI probably is less of an issue here, but check.
>
> Honestly, part of why I went the distributed filesystem route (Ceph
> these days) is to avoid dealing with this sort of nonsense.  Granted,
> now I'm looking to use more NVMe and if you want high capacity NVMe
> that tends to mean U.2, and dealing with bifurcation and PCIe
> switches, and just a different sort of nonsense
>


I've read that LSI is best.  I also noticed when looking a good while
back that some Ebay listings said they flashed a card to IT mode.  I
wasn't quite sure what that was until I saw the other mode was RAID.  I
figured that was the JBOD mode basically.  This is used, which is fine,
and pricey.  I'm mostly just trying to see if I'm headed down the right
path.  There could be cheaper or better. 

https://www.ebay.com/itm/391906134343

That says it can support up to 256 devices.  Would there be a slot on
the mobo, except for video slot, that it would fit into?  Next
question.  Cables.  What do I search for to get the right cable?  It
appears to be a bare card.  Are cables standard and the same or depends
on card, brand etc?

On the ASUS mobo.  Is that as good as I'm going to find?  I've yet to
find anything better. 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Rich Freeman
On Mon, Jun 3, 2024 at 11:17 AM Dale  wrote:
>
> When you say HBA.  Is this what you mean?
>
> https://www.ebay.com/itm/125486868824
>

Yes.  Typically they have mini-SAS interfaces, and you can get a
breakout cable that will attach one of those to 4x SATA ports.

Some things to keep in mind when shopping for HBAs:
1. Check for linux compatibility.  Not every card has great support.
2. Flashing the firmware may require windows, and this may be
necessary to switch a card between RAID mode and IT mode, the latter
being what you almost certainly want, and the former being what most
enterprise admins tend to have them flashed as.  IT mode basically
exposes all the drives that are attached as a bunch of standalone
drives, while RAID mode will just expose a limited number of virtual
interfaces and the card bundles the disks into arrays (and if the card
dies, good luck ever reading those disks again until you reformat
them).
3. Be aware they often use a ton of power.
4. Take note of internal vs external ports.  You can get either.  They
need different cables, and if your disks are inside the case having
the ports on the outside isn't technically a show-stopper but isn't
exactly convenient.
5. Take note of the interface speed and size.  The card you linked is
(I think) an 8x v2 card.  PCIe will auto-negotiate down, so if you
plug that card into your v4 4x slot it will run at v2 4x, which is
2GB/s bandwidth.  That's half of what it is capable of, but probably
not a big issue.  If you want to plug 16 enterprise SSDs into it then
you'll definitely hit the PCIe bottleneck, but if you plug 16 consumer
7200RPM HDDs into it you're only going to hit 2GB/s under fairly ideal
circumstances, and with fewer HDDs you couldn't hit it at all.  If you
pay more you'll get a newer PCIe revision, which means more bandwidth
for a given number of lanes.
6. Check for hardware compatibility too.  Stuff from 1st parties like
Dell/etc might be fussy about wanting to be in a Dell server with
weird firmware interactions with the motherboard.  A 3rd party card
like LSI probably is less of an issue here, but check.

Honestly, part of why I went the distributed filesystem route (Ceph
these days) is to avoid dealing with this sort of nonsense.  Granted,
now I'm looking to use more NVMe and if you want high capacity NVMe
that tends to mean U.2, and dealing with bifurcation and PCIe
switches, and just a different sort of nonsense
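
On points 1 and 2 above, a quick sanity check once a card is in (assuming an
LSI card picked up by the usual mpt2sas/mpt3sas driver - those names are
examples, not a guarantee) is to see which kernel driver claimed it and that
the disks show up as plain standalone devices:

$ # which driver bound to the HBA; IT-mode LSI cards typically show mpt2sas/mpt3sas
$ lspci -k | grep -iA3 'sas'
$ # each attached disk should appear as an ordinary block device here
$ lsblk -o NAME,SIZE,MODEL,TRAN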

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Dale
Rich Freeman wrote:
> On Mon, Jun 3, 2024 at 7:06 AM Dale  wrote:
>> I still wish it had more PCIe slots.  I'm considering switching to a SAS
>> card and then with cables change that to SATA.  I think I can get one
>> card and have most if not all of the drives the Fractal case will hold
>> hooked to it.
>> ...
>> Honestly, I wouldn't mind one m.2 for the OS.  I could just as well use
>> a SATA SSD to do that tho.
> First, I will point out that an M.2 gen5 (or even gen4) NVMe will
> perform VASTLY better than a SATA SSD.  That 990 Evo (which isn't an
> enterprise drive) boasts 800k IOPS.  The fastest SATA SSD I could find
> tops out at around 90k IOPS.  There is simply no comparison between
> SATA and NVMe, though whether that IOPS performance matters to you is
> another matter.
>
> As far as IO goes, your motherboard has the following PCIe interfaces:
> 16x PCIe v4
> 2 4x PCIe v4
> 1 M.2 v5
> 2 M.2 v4
>
> 4x should be enough for an HBA if you're running hard drives, so with
> bifurcation, risers, and so on, you could get 9 HBAs into that system,
> with 8-16 SATA ports on each, and with hard drives I imagine they'd
> perform as good as hard drives possibly can.  It would be a mess of
> adapters and cables, but you can certainly do it if you have the room
> in the case for that mess.
>
> Those 3 PCIe slots are all 16x physically, so you could easily get 3
> HBAs into the system without even having to resort to risers.  That's
> already 24-48 SATA ports.
>
> Sure, it isn't quite as convenient as the IO options of the past, but
> it isn't like you can't get PCIe in a system that has all those lanes.
> The motherboard has already switched most of the v5 down to v4 in
> exchange for more lanes, which honestly is a better option for you
> anyway as pretty much only gaming GPUs can use v5 effectively anyway.
>

Keep in mind, my current rig has the OS on a SATA II drive I think. 
This is part of smartctl -i for it.

SATA Version is:  SATA 2.6, 3.0 Gb/s

Basically, that is a really slow drive.  Almost anything is faster than
it. LOL  A SATA SSD would be a serious improvement.  Even a slow m.2
drive would do cartwheels around it.  ROFL

So basically, it is your opinion that the ASUS mobo is about as good as
I'm going to get unless I go with some server type mobo that costs a ton
of money?  If so, this is a sad day.  I guess I need to just order the
thing and say a serious prayer.  At least for the time being, my old
Gigabyte mobo will be in the Fractal case.  Keep in mind tho, I got
about 10 drives that need to hook to the new ASUS mobo.  I guess I need
to find a card for it right away.

When you say HBA.  Is this what you mean?

https://www.ebay.com/itm/125486868824

I just picked the first one I found.  The Fractal case holds at least 18
drives I think.  I may could rig it to handle even more, another 5 or 6
maybe.  Still, that type of card just with more ports and internal
ports, although I do like that external card there.  Be good for hooking
up my backup drives.  Would that connect to external drives?  That would
be nice in my current NAS box.  Yea, that would be real nice. 

Speaking of.  I got a better cage for my main backup drive that consists
of three drives on LVM.  It looks like this with a added fan.

https://www.ebay.com/itm/235563494830

Oh the tragedy of it all. 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Rich Freeman
On Mon, Jun 3, 2024 at 7:06 AM Dale  wrote:
>
> I still wish it had more PCIe slots.  I'm considering switching to a SAS
> card and then with cables change that to SATA.  I think I can get one
> card and have most if not all of the drives the Fractal case will hold
> hooked to it.
> ...
> Honestly, I wouldn't mind one m.2 for the OS.  I could just as well use
> a SATA SSD to do that tho.

First, I will point out that an M.2 gen5 (or even gen4) NVMe will
perform VASTLY better than a SATA SSD.  That 990 Evo (which isn't an
enterprise drive) boasts 800k IOPS.  The fastest SATA SSD I could find
tops out at around 90k IOPS.  There is simply no comparison between
SATA and NVMe, though whether that IOPS performance matters to you is
another matter.
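
If you ever want to see that difference on your own hardware instead of
taking spec-sheet numbers, fio can measure it.  A minimal sketch - the file
path and sizes are assumptions, point it at scratch space you don't care
about:

$ # 4k random reads against a scratch file; the IOPS figure in the output is
$ # the number to compare between drives
$ fio --name=iops-test --filename=/mnt/scratch/fio.tmp --size=1G \
      --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
      --runtime=30 --time_based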

As far as IO goes, your motherboard has the following PCIe interfaces:
16x PCIe v4
2 4x PCIe v4
1 M.2 v5
2 M.2 v4

4x should be enough for an HBA if you're running hard drives, so with
bifurcation, risers, and so on, you could get 9 HBAs into that system,
with 8-16 SATA ports on each, and with hard drives I imagine they'd
perform as good as hard drives possibly can.  It would be a mess of
adapters and cables, but you can certainly do it if you have the room
in the case for that mess.

Those 3 PCIe slots are all 16x physically, so you could easily get 3
HBAs into the system without even having to resort to risers.  That's
already 24-48 SATA ports.

Sure, it isn't quite as convenient as the IO options of the past, but
it isn't like you can't get PCIe in a system that has all those lanes.
The motherboard has already switched most of the v5 down to v4 in
exchange for more lanes, which honestly is a better option for you
anyway as pretty much only gaming GPUs can use v5 effectively anyway.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Dale
byte.size...@simplelogin.com wrote:
>
> On 03/06/2024 10:12, Dale wrote:
>>  From this link:
>>
>> https://www.amd.com/en/products/processors/desktops/ryzen/7000-series/amd-ryzen-5-7600x.html
>>
>>
>> Graphics Capabilities
>>
>> Graphics Model  AMD Radeon™ Graphics
>> Graphics Core Count 2
>> Graphics Frequency 2200 MHz
>>
>
> Yes, the 7600X will have a built in GPU - you're good.
>
> I finally upgraded last year to a proper desktop mid tower, after more
> than 15y, and went for the 7900X - I'm extremely happy with it.
>
> From the Ryzen 7000-series desktop AMD started including a [basic] GPU
> on almost all of their CPUs so I wouldn't worry about it, as long as
> you keep your expectations reasonable.
>
> Prior, it used to be that only the G/GT SKU would have a built-in GPU
> while everything else required a dedicated GPU. 7000-series changed
> that but also the GPU is a bit lower spec compared to what used to be
> the G-tier. On the other hand the G tier were lower clocked and not
> always 'overclockable' if that's something you care about. To make it
> more confusing, they released the Ryzen 8000 series which is
> essentially the same Zen 4 architecture, but has the missing G tier
> SKUs with the GPU essentially having more compute cores. But 8000
> series does not have X-tier "unlocked" SKUs.
>
> Anyway, the point is if you don't care about GPU 'horsepower' you'll
> probably be fine with the 7000X series CPUs built in one. If you do,
> *maybe* consider 8000G series. However, I would always recommend that
> if you need a better GPU, then getting a dedicated 2nd hand, older
> generation GPU would yield considerably better value for money.
>
> Beware of Rant:
>
> Can CPU/GPU companies please get their crappy naming schemes in order?
>
> AMD is once again trying to copy Intel's naming scheme which has long
> been the 'root of all evil'. On top of that they are also making it
> worse when bumping the first digit, without actually introducing a
> tangible generation uplift but rather a complementary set of SKUs.
>
> Nvidia is no better. What a lot of rubbish.
>
> Speaking of Nvidia, the 4000-series are an absolute pass for me,
> personally. It's another cash grabbing generation just like the 2000
> series. They're not 'bad' they're just terribly priced and tiered. Not
> to mention the whole re-SKU debacle that happened when they first
> introduced them. Now we're seeing the 'SUPER' sh*t again? Give me a
> break...
>
> - Victor


I'm sure I'll be happy with the system as far as speed and memory goes. 
That is a upgrade.  Things like Firefox, LOo and that qtweb package will
go by much faster for sure.  I plan to upgrade later to 7950X.  The rest
of the mobo tho is a downgrade for me. 

I don't need much video horsepower at all.  I watch TV from the second
port and the first port is my monitor, surfing the net and all that sort
of stuff.  No games anymore.  I play games on my cell phone nowadays. 
My current video card is a GeForce GTX 650.  I can't recall the model of
the one that has four ports.  It's PCIe v2 I think. 

I do wish they would use numbers that makes it easier to understand when
you getting something better or not.  Higher number should always mean
something newer and faster.  Heck, even the Linux kernel does that. 
Higher number, always newer. Some may not be 'better' tho.  :/  They try
tho. 

I wish I could find a mobo like the ASUS but with 3 or 4 more PCIe
slots, even if it means less m.2 things and USB ports. 

Dale

:-)  :-)


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Dale
Rich Freeman wrote:
> On Mon, Jun 3, 2024 at 5:12 AM Dale  wrote:
>> Graphics Capabilities
>>
>> Graphics Model  AMD Radeon™ Graphics
>> Graphics Core Count 2
>> Graphics Frequency 2200 MHz
>>
>> That said, I have a little 4 port graphics card I'd like to use anyway.
> The CPU you picked indeed has integrated graphics.  I didn't check,
> but I suspect the integrated graphics are way better in every way than
> the little 4-port graphics card you'd prefer.  Unless you really need
> those extra outputs, I'd use the integrated graphics, and then you
> have a 16x slot you can use for IO.

I thought it did.  I did look.  Plus, no one mentioned it not being the
right CPU for a mobo with a built in video. 

>> The more I think on this, the more I don't like spending this much money
>> on a mobo I don't really like at all.  It seems all the mobo makers want
>> is flashy crap.  I want a newer and faster machine but with options to
>> expand like my current rig.
> Ok, what EXACTLY are you looking for, as you didn't really elaborate
> on what this board is missing.  It sounds like the ability to
> interface more drives?  You have a free 16x slot.  Stick an HBA in it
> and you can interface a whole bunch of SATA drives.  With the right
> board you could even put NVMe in there.
>
> Any board you buy is going to be expensive.  They went the LGA route
> which makes the boards more expensive, and for whatever reason the
> prices have all been creeping up.
>
> Most of the IO on consumer CPUs these days tends to be focused on USB3
> and maybe a few M.2 drives.  They're very expandable, but not for the
> sorts of things you want to use them for.
>
> You might be happier with a server/workstation motherboard, but
> prepare to pay a small fortune for those unless you buy used, and the
> marketing is a bit cryptic as they tend to be sold through
> integrators.
>


I still wish it had more PCIe slots.  I'm considering switching to a SAS
card and then with cables change that to SATA.  I think I can get one
card and have most if not all of the drives the Fractal case will hold
hooked to it.  I was going to post a thread at some point and ask you to
help me pick a card and cable set.  I dug around and it seems that there
is more than one type or something.  Cables different.  I don't know
what.  One place says one thing, another says something different.  I
can't make sense of it.  That's in the future tho but I want to plan for
it.  One thing, I plan to move my current Gigabyte mobo to the Fractal
case.  After all, it has way more expansion options for drive
controllers than the newer mobos.  One day tho, the new rig may get
moved to the Fractal case, when I build the next new rig. The ASUS won't
be up to that task at all.  Who knows how many drives I'll have by then. 

Honestly, I wouldn't mind one m.2 for the OS.  I could just as well use
a SATA SSD to do that tho.  I bought one a while back for the new build
anyway.  I'd give up two m.2 things to have two more PCIe slots.  I'd
give up all m.2 things for 3 more PCIe slots. That would give me more
options.

I have searched Newegg, Amazon, Ebay and other places and even the mobos
that cost over double what I'm looking at now still has very few PCIe
slots.  They all have the flashy stuff and bling tho.  I realize the old
PCI went away.  It got replaced by PCIe which is faster.  Thing is,
other than the m.2 things, nothing is replacing PCIe except for USB. 
I've bricked a few hard drives using USB for hard drives.  I just don't
trust it for hard drives.  Works fine for my cell phone and little USB
sticks but that's about it for me.  Heck, I use at most 4 USB ports, two
are for keyboard and mouse.  On occasion, I may have two USB sticks in
at the same time transferring data.  I watched a video where they said
there was over a dozen v3.* USB ports on some newer mobos.  For me,
that's ridiculous.  Five would be more than enough for me.

I'm wanting a newer rig but what is available right now, at pretty much
any cost, isn't worth having.  Other than a faster CPU and more memory,
I'm downgrading not upgrading.  I don't want to have close to $1,000 of
regret and a system that won't serve my purpose.  I'm seriously thinking
that is what I'm going to end up with.  Sadly, I don't think there is
anything better out there.  I checked a skinflint link that was posted
on another thread.  It lists mobos that have the most PCIe slots. 
Nothing new.  My fear, it gets worse.  Later they may not have PCIe
slots at all. 

Maybe if I write Gigabyte, ASUS and others one of them will build mobos
that can be expanded.  It's almost like they limit us on purpose so we
have to buy more of them.  It's not like we not paying enough for them
already.  Prices are just plain crazy. 

Anyway, I'm on hold again.  I'm hoping for something better.  I'm just
doubtful it is going to happen. 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread byte . size226


On 03/06/2024 10:12, Dale wrote:

 From this link:

https://www.amd.com/en/products/processors/desktops/ryzen/7000-series/amd-ryzen-5-7600x.html

Graphics Capabilities

Graphics Model  AMD Radeon™ Graphics
Graphics Core Count 2
Graphics Frequency 2200 MHz



Yes, the 7600X will have a built in GPU - you're good.

I finally upgraded last year to a proper desktop mid tower, after more 
than 15y, and went for the 7900X - I'm extremely happy with it.


From the Ryzen 7000-series desktop AMD started including a [basic] GPU 
on almost all of their CPUs so I wouldn't worry about it, as long as you 
keep your expectations reasonable.


Prior, it used to be that only the G/GT SKU would have a built-in GPU 
while everything else required a dedicated GPU. 7000-series changed that 
but also the GPU is a bit lower spec compared to what used to be the 
G-tier. On the other hand the G tier were lower clocked and not always 
'overclockable' if that's something you care about. To make it more 
confusing, they released the Ryzen 8000 series which is essentially the 
same Zen 4 architecture, but has the missing G tier SKUs with the GPU 
essentially having more compute cores. But 8000 series does not have 
X-tier "unlocked" SKUs.


Anyway, the point is if you don't care about GPU 'horsepower' you'll 
probably be fine with the 7000X series CPUs built in one. If you do, 
*maybe* consider 8000G series. However, I would always recommend that if 
you need a better GPU, then getting a dedicated 2nd hand, older 
generation GPU would yield considerably better value for money.


Beware of Rant:

Can CPU/GPU companies please get their crappy naming schemes in order?

AMD is once again trying to copy Intel's naming scheme which has long 
been the 'root of all evil'. On top of that they are also making it 
worse when bumping the first digit, without actually introducing a 
tangible generation uplift but rather a complementary set of SKUs.


Nvidia is no better. What a lot of rubbish.

Speaking of Nvidia, the 4000-series are an absolute pass for me, 
personally. It's another cash grabbing generation just like the 2000 
series. They're not 'bad' they're just terribly priced and tiered. Not 
to mention the whole re-SKU debacle that happened when they first 
introduced them. Now we're seeing the 'SUPER' sh*t again? Give me a break...


- Victor


signature.asc
Description: OpenPGP digital signature


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Rich Freeman
On Mon, Jun 3, 2024 at 5:12 AM Dale  wrote:
>
> Graphics Capabilities
>
> Graphics Model  AMD Radeon™ Graphics
> Graphics Core Count 2
> Graphics Frequency 2200 MHz
>
> That said, I have a little 4 port graphics card I'd like to use anyway.

The CPU you picked indeed has integrated graphics.  I didn't check,
but I suspect the integrated graphics are way better in every way than
the little 4-port graphics card you'd prefer.  Unless you really need
those extra outputs, I'd use the integrated graphics, and then you
have a 16x slot you can use for IO.

> The more I think on this, the more I don't like spending this much money
> on a mobo I don't really like at all.  It seems all the mobo makers want
> is flashy crap.  I want a newer and faster machine but with options to
> expand like my current rig.

Ok, what EXACTLY are you looking for, as you didn't really elaborate
on what this board is missing.  It sounds like the ability to
interface more drives?  You have a free 16x slot.  Stick an HBA in it
and you can interface a whole bunch of SATA drives.  With the right
board you could even put NVMe in there.

Any board you buy is going to be expensive.  They went the LGA route
which makes the boards more expensive, and for whatever reason the
prices have all been creeping up.

Most of the IO on consumer CPUs these days tends to be focused on USB3
and maybe a few M.2 drives.  They're very expandable, but not for the
sorts of things you want to use them for.

You might be happier with a server/workstation motherboard, but
prepare to pay a small fortune for those unless you buy used, and the
marketing is a bit cryptic as they tend to be sold through
integrators.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Dale
Alan Grimes wrote:
>
> Dale wrote:
>> The one I'm not sure about is the PCIe one which may break apart to fit
>> different connectors.  I seem to recall that goes to a video card on
>> systems with those expensive and power hungry video cards.  Since this
>> mobo has built in video, is that the right thing?
>
> DUDE!!!
>
> You are buying a video card. =|
>
> Unless you've changed your parts selection since your post with the
> links... AMD boards DO NOT have video functionality. It's the truth!!!
> Some of their CPUs, going back to the FM2+ generation (which I have an
> example of...) have SOC functionality which includes a few SATA
> controllers and a few GPU cores. A typical x400G series chip will have
> 4 CPU cores and 8 GPU cores. The chip you selected is NOT a G-series
> chip so therefore you need a GPU
>
> There are a number of factors that go into selecting a PSU. For
> example, if you are running an RTX 4090, the recommended PSU is 850
> watts, so that's what you get... For a small GPU, just sum up the
> power requirements of all the parts in the system, add 10-20%, check
> to make sure that psu has all the power outputs you need and get it.
>
> My old threadripper was starting to burn out, the sound chip had gone
> down so I decided to spend a little bitcoin and buy a monster rig for
> the robot apocalypse, so I bought a new 32 core threadripper,
> installed 512gb ram, kept my Titan RTX gpu but added an RTX 6000 GPU,
> new SSDs, and a RAID array. I had trouble getting the UEFI firmware
> working, but once that was done my old gentoo install works like a
> champ. I'm powering the rig with a 1600W BeQuiet PSU, powered with a
> dedicated 240v circuit. (the motherboard has provisioning for
> overclocking and would require dual PSUs for that)
>
> Counting both new and re-used parts, the bill for the machine is in
> the ballpark of $20k
>
> The rig can run even 70B LLM AI systems in GPU memory. If you are
> looking for a sexy AI waifu, I suggest a model called Midnight-Miqu,
> you can grab it on Huggingface and the host software is lm-studio.
>
> Also: Asus is getting bad press these days, check Gamer's Nexus. Yes,
> I did buy their flagship board a few weeks ago, and I've had a
> terrible headache getting it running decently well...
>


From this link:

https://www.amd.com/en/products/processors/desktops/ryzen/7000-series/amd-ryzen-5-7600x.html

Graphics Capabilities

Graphics Model  AMD Radeon™ Graphics
Graphics Core Count 2
Graphics Frequency 2200 MHz

That said, I have a little 4 port graphics card I'd like to use anyway. 
It doesn't need a power connector.  I don't need one that is big or
expensive.  Watching videos is about as heavy as I get. 

The more I think on this, the more I don't like spending this much money
on a mobo I don't really like at all.  It seems all the mobo makers want
is flashy crap.  I want a newer and faster machine but with options to
expand like my current rig. 

I'm not sure about hitting that order button just yet. 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Alan Grimes



Dale wrote:

The one I'm not sure about is the PCIe one which may break apart to fit
different connectors.  I seem to recall that goes to a video card on
systems with those expensive and power hungry video cards.  Since this
mobo has built in video, is that the right thing?


DUDE!!!

You are buying a video card. =|

Unless you've changed your parts selection since your post with the 
links... AMD boards DO NOT have video functionality. It's the truth!!! 
Some of their CPUs, going back to the FM2+ generation (which I have an 
example of...) have SOC functionality which includes a few SATA 
controllers and a few GPU cores. A typical x400G series chip will have 4 
CPU cores and 8 GPU cores. The chip you selected is NOT a G-series chip 
so you need a GPU.


There are a number of factors that go into selecting a PSU. For example, 
if you are running an RTX 4090, the recommended PSU is 850 watts, so 
that's what you get... For a small GPU, just sum up the power 
requirements of all the parts in the system, add 10-20%, check to make 
sure that psu has all the power outputs you need and get it.
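
For illustration, that sum as a quick shell one-liner, with made-up example 
wattages (substitute the numbers for your actual parts):

# CPU 105 W + small GPU 30 W + board/RAM/fans 60 W + 10 drives at ~10 W each,
# plus 20% headroom:
echo $(( (105 + 30 + 60 + 10*10) * 120 / 100 ))   # prints 354 -> a 550-650 W unit leaves margin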


My old threadripper was starting to burn out, the sound chip had gone 
down so I decided to spend a little bitcoin and buy a monster rig for 
the robot apocalypse, so I bought a new 32 core threadripper, installed 
512gb ram, kept my Titan RTX gpu but added an RTX 6000 GPU, new SSDs, 
and a RAID array. I had trouble getting the UEFI firmware working, but 
once that was done my old gentoo install works like a champ. I'm 
powering the rig with a 1600W BeQuiet PSU, powered with a dedicated 240v 
circuit. (the motherboard has provisioning for overclocking and would 
require dual PSUs for that)


Counting both new and re-used parts, the bill for the machine is in the 
ballpark of $20k


The rig can run even 70B LLM AI systems in GPU memory. If you are 
looking for a sexy AI waifu, I suggest a model called Midnight-Miqu, you 
can grab it on Huggingface and the host software is lm-studio.


Also: Asus is getting bad press these days, check Gamer's Nexus. Yes, I 
did buy their flagship board a few weeks ago, and I've had a terrible 
headache getting it running decently well...


--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-02 Thread Dale
Dale wrote:
> Howdy, again,
>
> <<< SNIP >>>
>
> Thanks to all who help on this.  I'll be glad when this nightmare is
> over.  I've never been this disappointed in picking parts to build a rig
> before. 
>
> Dale
>
> :-)  :-) 
>


I thought of something.  I need a power supply.  I can use the one in my
current rig when that build moves to the large Fractal Design case, or I
can use the new one.  The Cooler Master HAF-932,
current rig, is also large.  Usually I buy a 600 watt power supply or
larger, sometimes 750 or 800.  Cable length is one thing I watch for due
to my large cases.  Thing is, this new motherboard may have more
connectors than I'm used to.  I downloaded a rather large picture of
the mobo.  I see the usual 24 pin connector close to the memory slots. 
I also see the usual 8 pin but also see a 4 pin connector, both up close
to the CPU.  I found this on the ASUS website.

Power related
1 x 24-pin Main Power connector
1 x 8-pin +12V Power connector
1 x 4-pin +12V Power connector


I found a power supply and the specs list this. 


Connector     Quantity
24 Pin ATX    1x
EPS (CPU)     2x 8pin (4+4)
PCIe          6x 8pin (6+2)


The one I'm not sure about is the PCIe one which may break apart to fit
different connectors.  I seem to recall that goes to a video card on
systems with those expensive and power hungry video cards.  Since this
mobo has built in video, is that the right thing?  Or is there something
new I need to look for?  The pin count is what makes me pause.  I search
for 'atx power supply' and then select power, brand etc.  Is ATX the
correct term?  Since the Fractal holds so many drives, I'm looking at an
850 watt power supply.  Whichever is larger will go in the Fractal case
due to the large number of drives heading its way, provided it also has long
enough cables.  I'm not sure about the connector for the 4 pin 12v one. 

Also, I'm searching Ebay at the moment.  Is there a keyword that
indicates longer cables?  I try to get as long as I can.  I took the
power supply off the NAS box and slid it into the Fractal case.  Its cables
are too short.  A power supply I'm looking at is a touch longer but it may
require some stretching for the 24 pin cable.  :/  I'm looking for
longer cables.  The main 24 pin cable needs to be around 26 inches or
almost 700mm.  The CPU and the other one look OK.  I'm just hoping there
is a keyword to help me find longer cables. 

Thanks again for all the help. 

Dale

:-)  :-) 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-02 Thread Wols Lists

On 02/06/2024 14:27, Dale wrote:

Should I put
the portage work directory on a spinning rust drive to save wear and
tear on the SSD or have they got to the point now that doesn't matter
anymore?  I know all the SSD devices have improved a lot since the first
ones came out.


The stuff I've seen says that SSDs have now reached the point where 
their typical life expectancy *exceeds* spinning rust.


The problem is they are typically programmed to do apoptosis (destroy 
themselves when things go wrong), which means once a drive fails it is DEAD.


I've recovered several apparently-dead spinning rust drives (and made a 
few quid that way), but whereas I backed up, and then binned, the zombie 
drives, it looks like an SSD won't give you the opportunity.


About the only thing spinning rust has in its favour nowadays is bang 
for buck - the cost per GB is noticeably cheaper. Even there, an SSD is 
likely to be cheaper to run, so "always on" swings the pendulum away 
from rust ...


Cheers,
Wol



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-02 Thread Dale
Peter Humphrey wrote:
> On Sunday, 2 June 2024 16:11:38 BST Dale wrote:
>
>> My plan, given it is a 1TB, use maybe 300GBs of it.  Leave the rest
>> blank.  Have the /boot, EFI directory, root and maybe put /var on a
>> separate partition.  I figure for the boot stuff, 3GBs would be plenty
>> for all combined.  Make them large so they can grow.  Make root, which
>> would include /usr, say 150GBs.  /var can be around 10GBs.  My current
>> OS is on a 160GB drive.  I wish I could get the nerve up to use LVM on
>> everything except the boot stuff, /boot and the EFI stuff.  If I make
>> them like above, I should be good for a long time.  Could go much larger
>> tho.  Could use maybe 700GBs of it.  I assume it would use the unused
>> part if needed.  I still don't know a lot about those things.  Mostly
>> what I see posted on this list really. 
> Doesn't everyone mount /tmp and /var/tmp/portage on tmpfs these days? I use 
> hard disk for a few large packages, but I'm not convinced it's needed - 
> except 
> when running an emerge -e, that is, when they can get in the way of lots of 
> others. That's why, some months ago, I suggested introducing an ability to 
> mark some packages for compilation solitarily. (Is that a word?)
>
> Here's the output of parted -l on my main NVMe disk in case it helps:
>
> Model: Samsung SSD 970 EVO Plus 250GB (nvme)
> Disk /dev/nvme1n1: 250GB
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
> Disk Flags: 
>
> Number  Start   End     Size    File system     Name    Flags
>  1      1049kB  135MB   134MB
>  2      135MB   4296MB  4161MB  fat32           boot    boot, esp
>  3      4296MB  12.9GB  8590MB  linux-swap(v1)  swap1   swap
>  4      12.9GB  34.4GB  21.5GB  ext4            rescue
>  5      34.4GB  60.1GB  25.8GB  ext4            root
>  6      60.1GB  112GB   51.5GB  ext4            var
>  7      112GB   114GB   2147MB  ext4            local
>  8      114GB   140GB   25.8GB  ext4            home
>  9      140GB   183GB   42.9GB  ext4            common
>
> The common partition is mounted under my home directory, to keep everything 
> I'd want to preserve if I made myself a new user account. It's v. useful, too.
>
>> P. S.  After I do the CPU upgrade, I'll have a spare CPU.  Then I'll
>> need another mobo, and memory set so that I can put that CPU to use.  No
>> need it sitting around on a shelf right???  ROFL 
> Welcome to baggage reclaim...:)
>


I do tmpfs for everything except Seamonkey, Firefox, LOo and that qtweb
package.  Those are set to use spinning rust.  It makes them slower but
quite often, they end up being compiled together in some combination.  I
could likely do it in tmpfs if it were just one of them, but not two or
more at once. 
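
For anyone curious, the per-package split is done with Portage's
package.env.  A minimal sketch; the 32G size and the on-disk path are
just examples:

# /etc/fstab -- build in RAM by default
tmpfs  /var/tmp/portage  tmpfs  size=32G,uid=portage,gid=portage,mode=775,noatime  0 0

# /etc/portage/env/notmpfs.conf -- send a package's build dir to spinning
# rust instead (create the directory first)
PORTAGE_TMPDIR="/var/tmp/notmpfs"

# /etc/portage/package.env -- apply it to the big packages
www-client/firefox       notmpfs.conf
app-office/libreoffice   notmpfs.conf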

I too would like a list of packages that we can set to be compiled on
their own.  For example.  Seamonkey pops up to be compiled.  LOo or some
other package is also ready to be compiled.  We can have a list that
tells emerge to wait on LOo until Seamonkey is done.  Once Seamonkey is
finished, then it starts LOo.  Other packages can be done the same way. 

I mentioned this once before some time ago on this mailing list.  It
seems doable since emerge can already be told that certain packages
can't be compiled until some other dependency is done first.  It seems
to me it could be a side item to that.  It's just that this would be
configured outside of the ebuild, whereas the existing ordering is usually
done inside the ebuild. 

The info you provided is interesting.  Your drive is a little larger
than my current one.  The one on the new rig is going to be even
larger.  I'm certainly going to make /boot larger, for things like
memtest, maybe a rescue image or two.  I made root plenty large last
time but plan to do things differently this time.  Even tho things will
change and likely make me wish I did it differently.  This is my current
setup.

root@fireball / # lvs
  LV    VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  swap  OS   -wi-ao     12.00g
  usr   OS   -wi-ao     39.06g
  var   OS   -wi-ao     52.00g

root@fireball / # df -h | grep sda
Filesystem   Size  Used Avail Use% Mounted on
/dev/sda6    23G  2.9G   19G  14% /
/dev/sda1   373M  201M  153M  57% /boot
/dev/mapper/OS-usr    39G   23G   14G  63% /usr
/dev/mapper/OS-var    52G   30G   20G  61% /var

root@fireball / #

As you can see, root is a bit too large but I didn't know how much the
stuff that goes there would grow so I wanted to be safe since it is a
raw partition and not easy to change.  I put /usr and /var on LVM.  I've
had to grow both of those, /usr several times.  Since I plan to put /usr
on root, I'll have to add both of those together and include some growing
room.  Other than that, I'll make /var larger since I do have to use
eclean on occasion, and /boot will be larger too.  Yours being 4GBs is 
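
As an aside on growing those LVs: with free extents left in the volume
group it is usually a one-liner.  For example:

lvextend -r -L +10G /dev/OS/var   # -r grows the ext4 filesystem along with the LV

The +10G is just an example amount.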

Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-02 Thread Peter Humphrey
On Sunday, 2 June 2024 16:11:38 BST Dale wrote:

> My plan, given it is a 1TB, use maybe 300GBs of it.  Leave the rest
> blank.  Have the /boot, EFI directory, root and maybe put /var on a
> separate partition.  I figure for the boot stuff, 3GBs would be plenty
> for all combined.  Make them large so they can grow.  Make root, which
> would include /usr, say 150GBs.  /var can be around 10GBs.  My current
> OS is on a 160GB drive.  I wish I could get the nerve up to use LVM on
> everything except the boot stuff, /boot and the EFI stuff.  If I make
> them like above, I should be good for a long time.  Could go much larger
> tho.  Could use maybe 700GBs of it.  I assume it would use the unused
> part if needed.  I still don't know a lot about those things.  Mostly
> what I see posted on this list really. 

Doesn't everyone mount /tmp and /var/tmp/portage on tmpfs these days? I use 
hard disk for a few large packages, but I'm not convinced it's needed - except 
when running an emerge -e, that is, when they can get in the way of lots of 
others. That's why, some months ago, I suggested introducing an ability to 
mark some packages for compilation solitarily. (Is that a word?)

Here's the output of parted -l on my main NVMe disk in case it helps:

Model: Samsung SSD 970 EVO Plus 250GB (nvme)
Disk /dev/nvme1n1: 250GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name    Flags
 1      1049kB  135MB   134MB
 2      135MB   4296MB  4161MB  fat32           boot    boot, esp
 3      4296MB  12.9GB  8590MB  linux-swap(v1)  swap1   swap
 4      12.9GB  34.4GB  21.5GB  ext4            rescue
 5      34.4GB  60.1GB  25.8GB  ext4            root
 6      60.1GB  112GB   51.5GB  ext4            var
 7      112GB   114GB   2147MB  ext4            local
 8      114GB   140GB   25.8GB  ext4            home
 9      140GB   183GB   42.9GB  ext4            common

The common partition is mounted under my home directory, to keep everything 
I'd want to preserve if I made myself a new user account. It's v. useful, too.

> P. S.  After I do the CPU upgrade, I'll have a spare CPU.  Then I'll
> need another mobo, and memory set so that I can put that CPU to use.  No
> need it sitting around on a shelf right???  ROFL 

Welcome to baggage reclaim...:)

-- 
Regards,
Peter.






Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-02 Thread Dale
Alan Mackenzie wrote:
> Hello, Dale.
>
> On Sun, Jun 02, 2024 at 08:27:57 -0500, Dale wrote:
>
> [  ]
>
>> Got the manual.  It says 128GB.  That sounds more like what I was
>> expecting anyway.  I kinda thought 256GB was a bit much.  That's why I
>> picked two 32GB sticks.  128GB is four times what I have now so it
>> should be enough for a while. 
> I'm pretty sure it's 128 GB, too.  Memory banks come in sizes of 4^n, and
> there are two sticks in each bank.  64 GB is indeed 4^18.  Two banks will
> make 128 GB.
>
>> I thought of something on the m.2 thing.  I plan to put my OS on it.  I
>> usually use tmpfs and compile in memory anyway but do have some set to
>> use spinning rust. Once I get 128GB installed, I should be able to do
>> that with all packages anyway 
> Indeed, 64 GB is easily ample for this, at the moment.  Trouble is,
> there's no saying how mad the rust project etc. will get over the
> lifetime of the new PC.  7 years ago, 16 GB seemed more than enough.  It
> doesn't any more.
>
>>  but still, I had a question.  Should I put the portage work
>> directory on a spinning rust drive to save wear and tear on the SSD or
>> have they got to the point now that doesn't matter anymore?
> A benchmark: My machine is 7 years old, and it contains no spinning rust,
> only two M2 Samsung SSDs in a RAID-1 configuration.  I looked at the wear
> statistics some months ago, and the number of read and write cycles on
> the SSDs was only around 3% of the guaranteed number.
>
> Your usage is obviously going to be different from mine (mainly SW
> development and updating Gentoo), but it may not be worth while worrying
> about SSD wear and tear.
>
>> I know all the SSD devices have improved a lot since the first ones
>> came out. 
> I haven't had a single problem with my two Samsung SSD 960 EVO 500GB
> drives in these 7 years.
>
> [  ]
>
>> Dale
>


When I saw 256GBs, I was like, I doubt that.  If the mobo cost a lot
more, maybe.  This price point, given the ridiculous price of other
boards, not likely.  I can't believe one can pay almost $1,000,
sometimes more, for a mobo that isn't some high performance server type
mobo.  I'm talking one that is expected to run at full tilt for many
years and do cartwheels around other mobos.  The prices are ridiculous.
Even the board I'm getting should be priced better and I'm likely
getting it at the cheapest there is to be found. 

I recall when SSDs first came out.  Basically, you didn't want to be
writing to them any more than needed.  Then they got better.  After a
little while longer, they got to be almost like a spinning rust drive. 
They need some special settings but other than that, they can last a
really long time.  I was sure that by now, they had improved even more. 
After all, people put windoze on them and it updates pretty regular
too.  I just wasn't sure how much wear the portage work directory would
put on that.  I suspect if I bought a cheap or no name brand, I'd need
to be more concerned.  Given I'm getting a Samsung which is well known
for their SSDs and their quality, I wanted to be sure even tho it should
be OK.  I figured someone else was using one of those things. 

My plan, given it is a 1TB, use maybe 300GBs of it.  Leave the rest
blank.  Have the /boot, EFI directory, root and maybe put /var on a
separate partition.  I figure for the boot stuff, 3GBs would be plenty
for all combined.  Make them large so they can grow.  Make root, which
would include /usr, say 150GBs.  /var can be around 10GBs.  My current
OS is on a 160GB drive.  I wish I could get the nerve up to use LVM on
everything except the boot stuff, /boot and the EFI stuff.  If I make
them like above, I should be good for a long time.  Could go much larger
tho.  Could use maybe 700GBs of it.  I assume it would use the unused
part if needed.  I still don't know a lot about those things.  Mostly
what I see posted on this list really. 
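
For what it's worth, that plan sketched out in parted terms.  The device
name and exact boundaries are only examples (check lsblk for the real
device first):

parted -s /dev/nvme0n1 mklabel gpt
parted -s /dev/nvme0n1 mkpart esp  fat32 1MiB 1GiB      # EFI system partition
parted -s /dev/nvme0n1 set 1 esp on
parted -s /dev/nvme0n1 mkpart boot ext4  1GiB 3GiB      # /boot
parted -s /dev/nvme0n1 mkpart root ext4  3GiB 153GiB    # / including /usr
parted -s /dev/nvme0n1 mkpart var  ext4  153GiB 163GiB  # /var, rest left blank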

Thanks to you and Rich for the replies.  They both helped.  Now to go
dig for a 4 stick memory kit.  So far, I can't find one on the Walmart
site.  Only pairs.  I was using my cell phone tho.  It's not suited for
serious digging. ;-)

Dale

:-)  :-) 

P. S.  After I do the CPU upgrade, I'll have a spare CPU.  Then I'll
need another mobo, and memory set so that I can put that CPU to use.  No
need it sitting around on a shelf right???  ROFL 



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-02 Thread Rich Freeman
On Sun, Jun 2, 2024 at 9:27 AM Dale  wrote:
>
> I thought of something on the m.2 thing.  I plan to put my OS on it.  I
> usually use tmpfs and compile in memory anyway but do have some set to
> use spinning rust. Once I get 128GB installed, I should be able to do
> that with all packages anyway but still, I had a question.  Should I put
> the portage work directory on a spinning rust drive to save wear and
> tear on the SSD or have they got to the point now that doesn't matter
> anymore?  I know all the SSD devices have improved a lot since the first
> ones came out.

So, as with most things, the answer is it depends.

The drive you're using is a consumer drive, rated for 600TB in writes.
Now, small random writes will probably wear it out faster, and large
sequential ones will probably wear it out slower, but that's basically
what you're working with.  That's about 0.3 DWPD, which isn't a great
endurance level.
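
To put a number on that, assuming the 1TB capacity mentioned earlier and a
typical 5 year warranty window:

awk 'BEGIN { print 600000 / (1000 * 365 * 5) }'   # ~0.33 drive writes per day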

Often these drives can be over-provisioned to significantly increase
their life - if you're using discard/trim properly and keep maybe
1/3rd of the drive empty you'll get a lot more life out of it.  In
fact, the difference between different models of drives with different
write endurances is often nothing more than the drive having more
internal storage than advertised and doing the same thing behind the
scenes.

Obviously temp file use is going to eat into your endurance, but it
will GREATLY improve your build performance as well, so you should
probably do the math on just how much writing we're talking about.  If
a package has 20GB in temp files, you have to build it 30k times to
wear out your disk by the official numbers.

Of course, proper use of discard/trim requires setting your config
files correctly, and it might reduce performance on consumer drives.
When you buy enterprise NVMe you're paying for a couple of things that
are relevant to you:
1. A higher endurance rating.
2. Firmware that doesn't do dumb things when you trim/discard properly
3. Power loss protection (you didn't bring this topic up, but flash
storage is not kind to power loss and often performance is sacrificed
to make writes safer with internal journals).
4. Sustained write performance.  If you do sustained writes to a
consumer drive you'll see the write speed fall off a cliff after a
time, and this won't happen on an enterprise drive - the cache/etc is
optimized for sustained write loads.

Of course enterprise flash is pretty expensive unless you buy it used,
and obviously if you do that try to get something whose health is
known at time of purchase.
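
On the trim/discard config point above: one common approach is to skip the
'discard' mount option and just trim periodically instead (fstrim ships with
sys-apps/util-linux):

fstrim -av                            # trim every mounted filesystem that supports it
systemctl enable --now fstrim.timer   # weekly timer on systemd; a weekly cron job works on OpenRC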

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-02 Thread Alan Mackenzie
Hello, Dale.

On Sun, Jun 02, 2024 at 08:27:57 -0500, Dale wrote:

[  ]

> Got the manual.  It says 128GB.  That sounds more like what I was
> expecting anyway.  I kinda thought 256GB was a bit much.  That's why I
> picked two 32GB sticks.  128GB is four times what I have now so it
> should be enough for a while. 

I'm pretty sure it's 128 GB, too.  Memory banks come in sizes of 4^n, and
there are two sticks in each bank.  64 GB is indeed 4^18.  Two banks will
make 128 GB.
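
A quick shell sanity check of that, counting in bytes:

echo $(( 4 ** 18 )) $(( 64 * 2 ** 30 ))   # both print 68719476736, i.e. 64 GiB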

> I thought of something on the m.2 thing.  I plan to put my OS on it.  I
> usually use tmpfs and compile in memory anyway but do have some set to
> use spinning rust. Once I get 128GB installed, I should be able to do
> that with all packages anyway 

Indeed, 64 GB is easily ample for this, at the moment.  Trouble is,
there's no saying how mad the rust project etc. will get over the
lifetime of the new PC.  7 years ago, 16 GB seemed more than enough.  It
doesn't any more.

>  but still, I had a question.  Should I put the portage work
> directory on a spinning rust drive to save wear and tear on the SSD or
> have they got to the point now that doesn't matter anymore?

A benchmark: My machine is 7 years old, and it contains no spinning rust,
only two M2 Samsung SSDs in a RAID-1 configuration.  I looked at the wear
statistics some months ago, and the number of read and write cycles on
the SSDs was only around 3% of the guaranteed number.
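
If anyone wants to read the same wear figures on their own drive, smartctl
from sys-apps/smartmontools reports them (the device name here is just an
example):

smartctl -a /dev/nvme0n1 | grep -i -E 'percentage used|data units written'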

Your usage is obviously going to be different from mine (mainly SW
development and updating Gentoo), but it may not be worth while worrying
about SSD wear and tear.

> I know all the SSD devices have improved a lot since the first ones
> came out. 

I haven't had a single problem with my two Samsung SSD 960 EVO 500GB
drives in these 7 years.

[  ]

> Dale

> :-)  :-)

> P. S.  I reported the memory error to Newegg, where it claims 256GB. 

:-)

-- 
Alan Mackenzie (Nuremberg, Germany).



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-02 Thread Dale
Peter Humphrey wrote:
> On Sunday, 2 June 2024 14:27:57 BST Dale wrote:
>
>> Got the manual.  It says 128GB.  That sounds more like what I was
>> expecting anyway.  I kinda thought 256GB was a bit much.
> I bought my machine from Armari a few years ago; they supply high-performance 
> workstations to City finance and investment houses. I have 2x32GB memory 
> sticks 
> in it, and I wondered about doubling that to 128GB, which is supposed to be 
> feasible on this motherboard.
>
> The advice from Armari was that I'd have to be careful selecting the memory 
> sticks, and that they ought to come in a matched set from the manufacturer. 
> Otherwise timing mismatches could become a problem, such that I'd have to fiddle 
> with memory speeds.
>
> I decided not to bother.
>
> Just my two-pen'orth.
>
> HTH.
>
> -- Regards, Peter.


I read that too but forgot.  It might be best to order all four.  Thing
is, should I order a 4 stick kit?  Right now, I got some large
packages updating in my chroot.  I closed the Firefox profiles I use for
buying, and finding, this stuff.  I'll need to look to see if they sell
it as a 4 stick kit.  Given they say it should be a matched set, they
should have a 4 stick kit.  Otherwise, what's the difference in ordering
two now and two more later?

Thanks for reminding me.  I read it but it slipped my mind.  A lot of
things do.  :/

Dale

:-)  :-) 


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-02 Thread Peter Humphrey
On Sunday, 2 June 2024 14:27:57 BST Dale wrote:

> Got the manual.  It says 128GB.  That sounds more like what I was
> expecting anyway.  I kinda thought 256GB was a bit much.

I bought my machine from Armari a few years ago; they supply high-performance 
workstations to City finance and investment houses. I have 2x32GB memory sticks 
in it, and I wondered about doubling that to 128GB, which is supposed to be 
feasible on this motherboard.

The advice from Armari was that I'd have to be careful selecting the memory 
sticks, and that they ought to come in a matched set from the manufacturer. 
Otherwise timing mismatches could become a problem, such that I'd have to fiddle 
with memory speeds.

I decided not to bother.

Just my two-pen'orth.

HTH.

-- 
Regards,
Peter.




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-02 Thread Dale
Wols Lists wrote:
> On 02/06/2024 06:38, Dale wrote:
>> My plan is the CPU above for now.  Later, I will upgrade to the Ryzen 9
>> 7900X to get even more speed.  I'll also max the memory out too.  I'm
>> unclear on the max memory tho.  One place shows 128GB, hence two 32GB
>> sticks.
>
> Go on the manufacturer's website, find the mobo, and download the
> manual from the support page.
>
> The other thing is, if you know the chipset (it should say), see what
> other mobos with the same chipset can do.
>
> Cheers,
> Wol
>
>


Got the manual.  It says 128GB.  That sounds more like what I was
expecting anyway.  I kinda thought 256GB was a bit much.  That's why I
picked two 32GB sticks.  128GB is four times what I have now so it
should be enough for a while. 

I thought of something on the m.2 thing.  I plan to put my OS on it.  I
usually use tmpfs and compile in memory anyway but do have some set to
use spinning rust. Once I get 128GB installed, I should be able to do
that with all packages anyway but still, I had a question.  Should I put
the portage work directory on a spinning rust drive to save wear and
tear on the SSD or have they got to the point now that doesn't matter
anymore?  I know all the SSD devices have improved a lot since the first
ones came out. 

Is the Gentoo wiki page for SSD the best way to set up a m.2 thing?  I
know they're different from spinning rust. 

https://wiki.gentoo.org/wiki/SSD

I'm going to give it a little while longer before ordering any of this. 
So far, no one has posted that something just won't work or came up with
a better mobo.  I was hoping on that last point.  I was pretty sure I
picked the right CPU and memory tho.  I got the info from the ASUS
website after all. 

Thanks to all who look and see if my build works together.  I'd hate to
make a mistake that costs me several hundred dollars.  O_O 

Dale

:-)  :-)

P. S.  I reported the memory error to Newegg, where it claims 256GB. 





Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-02 Thread Wols Lists

On 02/06/2024 06:38, Dale wrote:

My plan is the CPU above for now.  Later, I will upgrade to the Ryzen 9
7900X to get even more speed.  I'll also max the memory out too.  I'm
unclear on the max memory tho.  One place shows 128GB, hence two 32GB
sticks.


Go on the manufacturer's website, find the mobo, and download the manual 
from the support page.


The other thing is, if you know the chipset (it should say), see what 
other mobos with the same chipset can do.


Cheers,
Wol