On 2015-05-13 19:42, na...@cdl.asgaard.org wrote:
Greetings,
Do we really need them to be swappable at that point? The reason we
swap HDD's (if we do) is because they are rotational, and mechanical
things break.
Right.
Do we swap CPUs and memory hot?
Nope. Usually just toss the whole thing.
Greetings,
Do we really need them to be swappable at that point? The reason we
swap HDD's (if we do) is because they are rotational, and mechanical
things break. Do we swap CPUs and memory hot? Do we even replace
memory on a server that's gone bad, or just pull the whole thing during the …
On 05/11/2015 06:50 PM, Brandon Martin wrote:
8kW/rack is something it seems many a typical computing oriented
datacenter would be used to dealing with, no? Formfactor within the
rack is just a little different which may complicate how you can
deliver the cooling - might need unusually force…
To some extent people are comparing apples (not TM) and oranges.
Are you trying to maximize the number of total cores or the number of
total computes? They're not the same.
It depends on the job mix you expect.
For example, a map-reduce kind of problem (a search of a massive
database) probably is…
Here's someone's comparison between the B and B+ in terms of power:
http://raspi.tv/2014/how-much-less-power-does-the-raspberry-pi-b-use-than-the-old-model-b
On Mon, May 11, 2015 at 10:25 PM, Joel Maslak wrote:
> Rather than guessing on power consumption, I measured it.
>
> I took a Pi (Model B
Rather than guessing on power consumption, I measured it.
I took a Pi (Model B - but I suspect the B+ and the new version are relatively
similar in power draw with the same peripherals), hooked it up to a lab
power supply, and took a current measurement. My pi has a Sandisk SD card
and a Sandisk USB
Maybe I messed up the math in my head. My line of thought was: one Pi is
estimated to use 1.2 watts, whereas the NUC is at around 65 watts. 10 Pis
= 12 watts. My comparison was 65 W / 12 W = 5.4, i.e. the NUC uses about 5.4
times the power of 10 Pis put together. This is really a rough estimate
because I got the NUC's…
On 05/11/2015 06:21 PM, Randy Carpenter wrote:
That is 0.8-1.6 A at 5 V DC. A far cry from 120 V AC. We're talking ~5 W
versus ~120 W each.
Granted there is some conversion overhead, but worst case you are probably
talking about 1/20th the power you describe.
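The watts-from-volts-and-amps arithmetic above is easy to restate. A minimal sketch, using only the thread's own figures (0.8-1.6 A at 5 V, versus the ~120 W claim):

```python
# Sanity check of the DC power figures quoted above. The 0.8-1.6 A and
# 120 W numbers come from the thread; nothing here is measured.
volts = 5.0
amps_low, amps_high = 0.8, 1.6
watts_low = volts * amps_low    # P = V * I -> 4.0 W
watts_high = volts * amps_high  # P = V * I -> 8.0 W
print(watts_low, watts_high)                 # 4.0 8.0
print(120 / watts_high, 120 / watts_low)     # 15.0 30.0 -> roughly the "1/20th" claim
```

So even at the worst-case 1.6 A draw, a Pi sits between 1/15th and 1/30th of a 120 W box, which brackets the "1/20th" estimate.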
His estimates seem to consider that i
On Mon, 2015-05-11 at 14:36 -0700, Peter Baldridge wrote:
> I don't know how to do the math for the 'vat of oil scenario'. It's
> not something I've ever wanted to work with.
It's pretty interesting what you can do with immersion cooling. I work
with it at $DAYJOB. Similar to air cooling, but y
On Mon, May 11, 2015 at 3:21 PM, Randy Carpenter wrote:
>
> That is .8-1.6A at 5v DC. A far cry from 120V AC. We're talking ~5W versus
> ~120W each.
>
> Granted there is some conversion overhead, but worst case you are probably
> talking about 1/20th the power you describe.
Yeah, missed that.
Did I miss anything? Just a quick comparison.
If those numbers are accurate, then it leans towards the NUC rather than
the Pi, no?
Perf: 1x i5 NUC = 10x Pi
$$: 1x i5 NUC = 10x Pi
Power: 1x i5 NUC = 5x Pi
So...if a single NUC gives you the performance of 10x Pis at the capital
cost of
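The NUC-versus-ten-Pis power comparison in this subthread works out as follows. A rough sketch; the 65 W, 1.2 W, and 10x-performance inputs are all the thread's own estimates, not measurements:

```python
# Restating the thread's rough equivalence: one i5 NUC vs ten Pis.
nuc_watts = 65.0          # NUC figure quoted earlier in the thread
pi_watts = 1.2            # measured Pi figure from the thread
pis_for_equal_perf = 10   # the thread's perf equivalence claim
cluster_watts = pi_watts * pis_for_equal_perf   # 12.0 W for equivalent perf
print(round(nuc_watts / cluster_watts, 1))      # 5.4 -> NUC draws ~5.4x the power
```

On these inputs the Pi cluster wins on power for equal performance, which is the opposite of the "Power: 1x i5 NUC = 5x Pi" line; the discrepancy hinges on whose wattage figure for the NUC you believe.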
- On May 11, 2015, at 5:36 PM, Peter Baldridge petebaldri...@gmail.com
wrote:
Pi dimensions:
3.37 in long (5 front to back)
2.21 in wide (6 wide)
0.83 in high
25 per U (rounding down for Ethernet cable space etc) = 825 pi
>
> You butt up against major power/heat issues here in a
Interesting! Given that a Pi costs approximately $35, you need
approximately $350 worth of Pis to get near an i5. The smallest and cheapest
desktop you can get that would have similar power is the Intel NUC with an
i5, which goes for approximately $350. Power consumption of a NUC is about
5x that of the Raspberry Pi.
As it turns out, I've been playing around benchmarking things lately
using the tried and true
UnixBench suite and here are a few numbers that might put this in some
perspective:
1) My new Raspberry Pi (4 cores, ARM): 406
2) My home i5-like thing (Asus, 4 cores, 16 GB, from last year): 3857
3) AW
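Those UnixBench indices make the price/performance comparison concrete. A quick sketch; the scores are from the post above, and the $35 / $350 prices are the thread's rough figures:

```python
# UnixBench index scores quoted above; prices are the thread's estimates.
pi_score, i5_score = 406, 3857
pi_cost, i5_cost = 35.0, 350.0
print(round(i5_score / pi_score, 1))   # 9.5 -> one i5 box ~ nine to ten Pis
print(round(pi_score / pi_cost, 1))    # 11.6 UnixBench points per dollar (Pi)
print(round(i5_score / i5_cost, 1))    # 11.0 points per dollar (i5)
```

Per dollar the two land within a few percent of each other, which is why the thread keeps circling back to power, space, and cabling as the real deciders.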
>>> Pi dimensions:
>>>
>>> 3.37 l (5 front to back)
>>> 2.21 w (6 wide)
>>> 0.83 h
>>> 25 per U (rounding down for Ethernet cable space etc) = 825 pi
You butt up against major power/heat issues here in a single rack, not
that it's impossible. From what I could find, the RPi 2 requires 0.5 A
min. The…
On Mon, May 11, 2015 at 1:37 PM, Clay Fiske wrote:
>
>> On May 8, 2015, at 10:24 PM, char...@thefnf.org wrote:
>>
>> Pi dimensions:
>>
>> 3.37 l (5 front to back)
>> 2.21 w (6 wide)
>> 0.83 h
>> 25 per U (rounding down for Ethernet cable space etc) = 825 pi
The Parallella board is about the same
> On May 8, 2015, at 10:24 PM, char...@thefnf.org wrote:
>
> Pi dimensions:
>
> 3.37 l (5 front to back)
> 2.21 w (6 wide)
> 0.83 h
> 25 per U (rounding down for Ethernet cable space etc) = 825 pi
>
> Cable management and heat would probably kill this before it ever reached
> completion, but l
At least some vendors are already doing that. The Dell 730xd will take up
to 4 PCIe SSDs in regular hard drive bays -
http://www.dell.com/us/business/p/poweredge-r730xd/pd
Nick
On Sat, May 9, 2015 at 3:26 PM, Eugeniu Patrascu wrote:
> On Sat, May 9, 2015 at 9:55 PM, Barry Shein wrote:
>
> >
>
On Sat, May 9, 2015 at 11:55 AM, Barry Shein wrote:
>
> On May 9, 2015 at 00:24 char...@thefnf.org (char...@thefnf.org) wrote:
> >
> >
> > So I just crunched the numbers. How many pies could I cram in a rack?
>
> For another list I just estimated how many M.2 SSD modules one could
> cram into a
On Sat, May 9, 2015 at 9:55 PM, Barry Shein wrote:
>
> On May 9, 2015 at 00:24 char...@thefnf.org (char...@thefnf.org) wrote:
> >
> >
> > So I just crunched the numbers. How many pies could I cram in a rack?
>
> For another list I just estimated how many M.2 SSD modules one could
> cram into a
On May 9, 2015 at 00:24 char...@thefnf.org (char...@thefnf.org) wrote:
>
>
> So I just crunched the numbers. How many pies could I cram in a rack?
For another list I just estimated how many M.2 SSD modules one could
cram into a 3.5" disk case. Around 40 w/ some room to spare (assuming
heat a
From the work that I've done in the past with clusters, your need for
bandwidth is usually not the biggest issue. When you work with "big data",
let's say 500 million data points, most mathematicians would condense it
all down into averages, standard deviations, probabilities, etc., which then
become…
The problem is, I can get more processing power and RAM out of two 10RU blade
chassis while needing only 64 10G ports...
32 blades x 256GB RAM = 8.2TB
32 blades x 16 cores x 2.4GHz = 1,228.8GHz aggregate
(not based on current highest possible, just using reasonable specs)
Needing only 4 QFX5100s, which will cost l…
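The blade-chassis totals above check out arithmetically. A quick sketch, assuming 32 blades across the two 10RU chassis as the post implies:

```python
# Sanity check of the blade-chassis figures above.
blades = 32
ram_gb = blades * 256               # 8192 GB, i.e. ~8.2 TB total
aggregate_ghz = blades * 16 * 2.4   # 512 cores at 2.4 GHz each
print(ram_gb, round(aggregate_ghz, 1))   # 8192 1228.8
```

That is roughly 8.2 TB of RAM and 512 cores in 20RU, which is the density argument being made against the Pi rack.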
So I just crunched the numbers. How many pies could I cram in a rack?
Check my numbers?
48U rack budget
6513 switch: 15U; (48 - 15) = 33U remaining for Pi
6513 max of 576 copper ports
Pi dimensions:
3.37 in long (5 front to back)
2.21 in wide (6 wide)
0.83 in high
25 per U (rounding down for Ethernet cable space etc) = 825 Pi
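The rack arithmetic above can be restated in a few lines. All inputs are the post's own rough numbers:

```python
# Rack-capacity arithmetic from the post, restated.
rack_u = 48
switch_u = 15        # Cisco 6513 chassis height
pis_per_u = 25       # rounded down for Ethernet cable space
free_u = rack_u - switch_u
total_pis = free_u * pis_per_u
print(free_u, total_pis)   # 33 825
# Note the mismatch the post itself sets up: 825 Pis need 825 access
# ports, but the 6513 tops out at 576 copper ports.
```

So even before power and heat, the single-switch design is short by about 250 ports.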