On 05/04/2021 12:14, Christoph K. wrote:
> Hi folks,
>
> I was wondering if I can build a cluster to convert / transcode videos
> with ffmpeg.
>
> There are some workstations standing around here ... and I thought maybe
> it's possible to combine their computing power?
>
Christoph K. wrote:
> Hi folks,
>
> I was wondering if I can build a cluster to convert / transcode videos
> with ffmpeg.
>
> There are some workstations standing around here ... and I thought maybe
> it's possible to combine their computing power?
>
> To be
Hi folks,
I was wondering if I can build a cluster to convert / transcode videos
with ffmpeg.
There are some workstations standing around here ... and I thought maybe
it's possible to combine their computing power?
To be clear: The task is to work on a single video as fast as possible.
I
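There is no stock way to make several machines work on a single ffmpeg encode, but the usual workaround is to split the source on keyframes, encode the chunks on different hosts, and join the results. A rough sketch only; the file names, segment length and x264 settings are placeholders, and audio is simply stream-copied:

# 1) split without re-encoding (cuts land on keyframes)
$ ffmpeg -i input.mkv -map 0 -c copy -f segment -segment_time 60 \
        -reset_timestamps 1 chunk_%03d.mkv

# 2) copy the chunks to the workstations (scp, NFS, ...) and encode them there
$ ffmpeg -i chunk_000.mkv -c:v libx264 -crf 21 -preset medium -c:a copy enc_000.mkv

# 3) collect the encoded chunks and join them with the concat demuxer
$ for f in enc_*.mkv; do echo "file '$f'"; done > list.txt
$ ffmpeg -f concat -safe 0 -i list.txt -c copy output.mkv

The joins are clean because the cuts were on keyframes; the split and concat steps still run on one machine, so they limit how much the extra boxes can help.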
hello,
in LVM with partman, I would like to create 2 VGs for 2 clusters of disks.
How does it work?
I have not succeeded with my preseed.cfg recipe.
However, it is possible during a manual install from an ISO.
best regards
thanks
jeb
Roman Serbski wrote:
> On Tue, Sep 19, 2017 at 11:25 AM, Roman Serbski
> wrote:
>> On Mon, Sep 18, 2017 at 6:18 PM, Sven Hartge wrote:
>>> Roman Serbski wrote:
>>>
>>> Maybe o2cb and ocfs2 need to be ordered after network-online.target?
>>
>> I thought about it too, but according to /etc/init.
On Tue, Sep 19, 2017 at 11:25 AM, Roman Serbski
wrote:
> On Mon, Sep 18, 2017 at 6:18 PM, Sven Hartge wrote:
>> Roman Serbski wrote:
>>
>> Maybe o2cb and ocfs2 need to be ordered after network-online.target?
>
> I thought about it too, but according to /etc/init.d/o2cb and
> /etc/init.d/ocfs2 th
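If the ordering really is the problem, a drop-in for the generated units is the usual way to test it. A sketch only; whether o2cb/ocfs2 also have to wait for drbd depends on the setup:

# make o2cb (and likewise ocfs2.service) wait for the network to be up
$ mkdir -p /etc/systemd/system/o2cb.service.d
$ cat > /etc/systemd/system/o2cb.service.d/override.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
$ systemctl daemon-reload

# network-online.target only means something if a wait-online service is enabled,
# e.g. systemd-networkd-wait-online.service or NetworkManager-wait-online.service,
# depending on what manages the interfaces.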
> Missing "_netdev"?
Thank you. Tried that -- no difference.
>> The status of both o2cb and ocfs2 services is inactive (dead) with the
>> ocfs2 cluster offline:
>
>> $ service o2cb status
>> o2cb.service - LSB: Load O2CB cluster services at system boot.
>> Loaded
Roman Serbski wrote:
> /etc/fstab
> ###
> /dev/drbd0 /var/www ocfs2 noauto,noatime 0 0
> ###
> After the reboot, no /var/www is mounted.
Missing "_netdev"?
> The status of both o2cb and ocfs2 services is inactive (dead) with
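For reference, the suggestion amounts to replacing noauto with _netdev, roughly like this (the extra nofail is my own assumption, not part of the suggestion):

/etc/fstab
###
/dev/drbd0 /var/www ocfs2 _netdev,noatime,nofail 0 0
###

_netdev marks the filesystem as needing the network, so the early mount -a skips it and it is mounted later, once the network and the cluster stack are up; as far as I remember the ocfs2 init script only picks up fstab entries carrying that option.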
Hi,
Anyone here using an ocfs2 cluster with Stretch? I can't get it to be
online during boot for some reason, hence the ocfs2 partition can't
be mounted. This is an upgrade from Jessie, where everything was
working just fine. My setup consists of two nodes.
$ uname -a
Linux QSRV01 4.9.0-
Hi,
On 2 August 2015 at 16:18, Leslie Rhorer wrote:
> I need a little (or maybe more than a little) advice and guidance on
> setting up a High Availability cluster on some Debian machines. I've read
> through the man pages and the config files, but I'm falling short
A few comments, below:
Leslie Rhorer wrote:
I need a little (or maybe more than a little) advice and guidance on setting up
a High Availability cluster on some Debian machines. I've read through the man
pages and the config files, but I'm falling short of understanding everything
> >
> > authkeys:
> > auth 2
> > 2 sha1 HI!
>
> Hi !
>
> This is a config file for heartbeat v1, not pacemaker...
I know that. It's what comes with Debian Wheezy and Jessie.
> Pacemaker is database driven (successor of heartbeat v2).
I wasn't aware
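For comparison, the same idea (a floating IP with a service following it) written as a Pacemaker configuration looks roughly like this, using the crm shell; the address, the resource names and the Apache paths are only placeholders:

$ crm configure primitive cluster_ip ocf:heartbeat:IPaddr2 \
      params ip=192.168.1.200 cidr_netmask=24 op monitor interval=30s
$ crm configure primitive website ocf:heartbeat:apache \
      params configfile=/etc/apache2/apache2.conf op monitor interval=60s
$ crm configure group web_services cluster_ip website
$ crm configure property no-quorum-policy=ignore   # two-node clusters have no real quorum

Fencing/STONITH still has to be configured separately, as pointed out elsewhere in these threads.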
On 02/08/2015 23:18, Leslie Rhorer wrote:
[...]
ha.cf:
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
auto_failback on
node RAID-Server
node Backup
node Thermostat
ping 192.168.1.117
ping 192.168.1.118
respawn haclust
I need a little (or maybe more than a little) advice and guidance on setting up
a High Availability cluster on some Debian machines. I've read through the man
pages and the config files, but I'm falling short of understanding everything I
need to do. I am still in the process of obt
System hangs on Debian 7.6 and 7.8 when enabling Cluster on Die
(COD) with Intel Haswell and Broadwell processors
On Tue, 09 Jun 2015, Dan Ritter wrote:
> On Tue, Jun 09, 2015 at 11:06:30AM +, vincent...@mic.com.tw wrote:
> > Hi Debian-user,
> >
> > I got an issue – “Syste
On Wed, 10 Jun 2015, Sven Arvidsson wrote:
> On Tue, 2015-06-09 at 11:06 +, vincent...@mic.com.tw wrote:
> > I got an issue – “System hangs on Debian 7.6 and 7.8 when enabling
> > Cluster on die (COD) with Intel Haswell and Broadwell processors”.
> > Does Debian 7.6 suppor
On Tue, 09 Jun 2015, Dan Ritter wrote:
> On Tue, Jun 09, 2015 at 11:06:30AM +, vincent...@mic.com.tw wrote:
> > Hi Debian-user,
> >
> > I got an issue – “System hangs on Debian 7.6 and 7.8 when enabling
> > Cluster on die (COD) with Intel Haswell and Broadwell pr
On Tue, 2015-06-09 at 11:06 +, vincent...@mic.com.tw wrote:
> Hi Debian-user,
>
> I got an issue – “System hangs on Debian 7.6 and 7.8 when enabling
> Cluster on die (COD) with Intel Haswell and Broadwell processors”.
> Does Debian 7.6 support Cluster on Die (COD) fea
On Tue, Jun 09, 2015 at 11:06:30AM +, vincent...@mic.com.tw wrote:
> Hi Debian-user,
>
> I got an issue – “System hangs on Debian 7.6 and 7.8 when enabling Cluster on
> die (COD) with Intel Haswell and Broadwell processors”.
> Does Debian 7.6 support Cluster on Die (COD) fea
Hi Debian-user,
I got an issue – “System hangs on Debian 7.6 and 7.8 when enabling Cluster on
die (COD) with Intel Haswell and Broadwell processors”.
Does Debian 7.6 support the Cluster on Die (COD) feature with Intel processors?
Vincent Du
Firmware Design Dept.
vincent...@mic.com.tw
On Fri, Mar 28, 2014 at 09:29:23AM +0100, basti wrote:
> [ASCII diagram: client --> (shared IP) --> SRV1 / SRV2 --> Node 1, Node 2, ..., Node n]
>
> Can
rd but I think these are just
proxies; they don't need guaranteed I/O and work in most cases from
RAM or cache.
On 27.03.2014 18:21, Denis Witt wrote:
> On Thu, 27 Mar 2014 16:47:15 +0100
> basti wrote:
>
>> perhaps that's a bit off topic here but can someone explain wha
On 27/03/2014 18:21, Denis Witt wrote:
On Thu, 27 Mar 2014 16:47:15 +0100
basti wrote:
perhaps that's a bit off topic here but can someone explain what I
need to build a hardware failover nginx cluster?
In a nutshell:
* (at least) two servers
* monitoring software
* a shar
On Thu, 27 Mar 2014 16:47:15 +0100
basti wrote:
> perhaps that's a bit off topic here but can someone explain what I
> need to build a hardware failover nginx cluster?
In a nutshell:
* (at least) two servers
* monitoring software
* a shared IP
* something that will switch
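One common way to get the shared IP, the monitoring and the switching in a single package is keepalived (VRRP). A minimal sketch; the interface name, router id, password and addresses are placeholders, and the second box runs the same block with state BACKUP and a lower priority:

/etc/keepalived/keepalived.conf (primary node)
###
vrrp_script chk_nginx {
    script "pidof nginx"   # simple health check so a dead nginx also triggers failover
    interval 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cret
    }
    virtual_ipaddress {
        192.0.2.10/24      # the shared IP clients connect to
    }
    track_script {
        chk_nginx
    }
}
###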
On Thu, Mar 27, 2014 at 04:47:15PM +0100, basti wrote:
> Hello,
> perhaps that's a bit off topic here but can someone explain what I need
> to build a hardware failover nginx cluster?
>
> The unclear is:
> How does the client know that the server 1 is down and use the ot
Hi,
> perhaps that's a bit off topic here but can someone explain what I need to
> build a hardware failover nginx cluster?
Nope, but define hardware. A load balancer does not care what OS you are
running; it does care what service it needs to balance. So, would "a debian
Hello,
perhaps that's a bit off topic here but can someone explain what I need
to build a hardware failover nginx cluster?
The unclear part is:
How does the client know that server 1 is down and use the other one?
DNS failover has a delay because of the design I think.
client --->
On 2013-11-27 11:19, basti wrote:
> I plan to setup an Active/Active HA Webserver with 2 VPS.
> I read something about Heartbeat/Pacemaker and HAProxy but what do I
> need? What is overkill?
Heartbeat is not for active/active; it is typically for an active/passive setup.
> And is it possib
On 11/27/2013 11:19 AM, basti wrote:
> I plan to setup an Active/Active HA Webserver with 2 VPS. I read
> something about Heartbeat/Pacemaker and HAProxy but what do I need?
> What is overkill? And is it possible to set this up with only 1
> public I
Hi
On Wed, Nov 27, 2013 at 11:19:15AM +0100, basti wrote:
> Hello,
> I plan to setup an Active/Active HA Webserver with 2 VPS.
> I read something about Heartbeat/Pacemaker and HAProxy but what do I
> need? What is overkill?
heartbeat/pacemaker are good for making an IP address "float" between
s
Hello,
I plan to set up an Active/Active HA webserver with 2 VPS.
I read something about Heartbeat/Pacemaker and HAProxy, but what do I
need? What is overkill?
And is it possible to set this up with only 1 public IP per server?
Is there a tutorial somewhere?
Regards,
Basti
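If both VPS are really supposed to serve traffic (active/active), the usual pattern is DNS pointing at both public addresses and HAProxy on each box balancing to both backends, so either machine can carry the full load on its own. A minimal sketch; addresses and ports are placeholders, and a defaults section with mode http is assumed:

/etc/haproxy/haproxy.cfg (relevant part, identical on both VPS)
###
frontend http-in
    bind *:80
    default_backend web

backend web
    balance roundrobin
    option httpchk GET /
    server web1 10.0.0.1:8080 check   # local web server
    server web2 10.0.0.2:8080 check   # web server on the other VPS
###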
On 19 August 2012 02:32, Stan Hoeppner wrote:
> On 8/18/2012 6:36 AM, Mauro wrote:
>
>> I've upgraded ram from 32 to 64G.
>
> Did the reboots occur before doing this?
>
>> I've reinstalled all simms.
>
> DIMMs. SIMMs haven't been used for over a decade. But the fact you
> mentioned SIMMs tells m
On 8/18/2012 6:36 AM, Mauro wrote:
> I've upgraded ram from 32 to 64G.
Did the reboots occur before doing this?
> I've reinstalled all simms.
DIMMs. SIMMs haven't been used for over a decade. But the fact you
mentioned SIMMs tells me you've been at this game a while.
> The bios reports no ra
On 17 August 2012 09:53, Stan Hoeppner wrote:
> I'd be thoroughly inspecting the power circuits feeding those servers at
> this point. Do you have the machines set to automatically power back on
> after power loss? If you do, switch that mode so they stay off after AC
> power loss. That shoul
On 8/17/2012 1:52 AM, Mauro wrote:
> On 14 August 2012 08:24, Mauro wrote:
>> On 13 August 2012 22:58, Stan Hoeppner wrote:
>>
>>> That being the case I'd suspect something other than server hardware.
>>> To be sure, manually remove one node from the clus
On 14 August 2012 08:24, Mauro wrote:
> On 13 August 2012 22:58, Stan Hoeppner wrote:
>
>> That being the case I'd suspect something other than server hardware.
>> To be sure, manually remove one node from the cluster and see how long
>> the remaining node runs with
On 13 August 2012 22:58, Stan Hoeppner wrote:
> That being the case I'd suspect something other than server hardware.
> To be sure, manually remove one node from the cluster and see how long
> the remaining node runs without rebooting. If it doesn't reboot at all,
> that
e with us.
> The strange thing is that happens alternately in both nodes.
That being the case I'd suspect something other than server hardware.
To be sure, manually remove one node from the cluster and see how long
the remaining node runs without rebooting. If it doesn't reboot at
>
> Are these controlled shutdowns? Or are these hardware crash/reboots
> that are occurring?
>
> If the former you should see syslog entries for the shutdown sequence.
> If the latter, you won't see anything in the logs. This would suggest
> you've got a hardware problem, and not related to faul
nuous reboots of my two nodes in a
>>>>> heartbeat+pacemaker cluster.
>>>>> Reboots are random, one day they happen one other day not, sometime
>>>>> for 7 days they don't happen, sometimes they happen at night.
>>>>> They happen at ra
On 12 August 2012 20:39, Stan Hoeppner wrote:
> On 8/12/2012 4:44 AM, Mauro wrote:
>> On 11 August 2012 19:23, Stan Hoeppner wrote:
>>> On 8/11/2012 8:59 AM, Mauro wrote:
>>>> Hello, I'm experiencing continuous reboots of my two nodes in a
>>>>
On 8/12/2012 4:44 AM, Mauro wrote:
> On 11 August 2012 19:23, Stan Hoeppner wrote:
>> On 8/11/2012 8:59 AM, Mauro wrote:
>>> Hello, I'm experiencing continuous reboots of my two nodes in a
>>> heartbeat+pacemaker cluster.
>>> Reboots are random, one da
On 11 August 2012 19:23, Stan Hoeppner wrote:
> On 8/11/2012 8:59 AM, Mauro wrote:
>> Hello, I'm experiencing continuous reboots of my two nodes in a
>> heartbeat+pacemaker cluster.
>> Reboots are random, one day they happen one other day not, sometime
>> for 7 d
On 8/11/2012 8:59 AM, Mauro wrote:
> Hello, I'm experiencing continuous reboots of my two nodes in a
> heartbeat+pacemaker cluster.
> Reboots are random, one day they happen one other day not, sometime
> for 7 days they don't happen, sometimes they happen at night.
> They
Hello, I'm experiencing continuous reboots of my two nodes in a
heartbeat+pacemaker cluster.
Reboots are random: one day they happen, another day they don't; sometimes
for 7 days they don't happen, sometimes they happen at night.
They happen on random days and at random times.
Nodes are connected
On Mon, Apr 30, 2012 at 02:20:31PM +0200, Frank Van Damme wrote:
> Hi.
>
> I am in the process of setting up a small 2-node HA cluster (for NFS,
> active/passive) with a shared disk for storage. Because Corosync and
> pacemaker look nice and good, I am trying to make it
Hi.
I am in the process of setting up a small 2-node HA cluster (for NFS,
active/passive) with a shared disk for storage. Because Corosync and
pacemaker look nice and good, I am trying to make it work with this
combination instead of cman, which is what clvm is compiled for in Debian.
There are
On 10/16/2011 07:21 AM, Joey L wrote:
> Digimer - thanks for your input - you saved me a ton of time!!!
> I did look at your tutorial -- great stuff BTW.
Thank you. :)
> I thought fencing was an option because I setup RH cluster about 5
> years ago and I thought I did not do it then.
On Sunday 16 October 2011 13:21:43, Joey L wrote:
[...]
> About pacemaker --
> Do I need fencing hardware as well ??
It's better, but optional.
> I just got 2 servers and a regular switch - I think it's a Netgear.
> Like I said earlier - just want the 2 boxes to back up each other.
> I ha
Digimer - thanks for your input - you saved me a ton of time!!!
I did look at your tutorial -- great stuff BTW.
I thought fencing was an option because I set up a RH cluster about 5
years ago and I thought I did not do it then... and further, the RHEL
Cluster Administrator had points that it was
On 10/15/2011 09:33 AM, Joey L wrote:
> On Fri, Oct 14, 2011 at 7:25 PM, Walter Hurry wrote:
>> On Fri, 14 Oct 2011 16:58:01 -0400, Joey L wrote:
>>
>>> I am new to redhat cluster and i am having some issues.
>>
>> Why do you keep starting new threads for the
On 10/15/2011 09:30 AM, Joey L wrote:
>>
>> I don't use apache, so I can't speak to that resource agent's config. I can
>> say though that overall it looks okay with two exceptions.
>>
>> You *must* configure fencing for the cluster to work properly. E
On Fri, Oct 14, 2011 at 7:25 PM, Walter Hurry wrote:
> On Fri, 14 Oct 2011 16:58:01 -0400, Joey L wrote:
>
>> I am new to redhat cluster and i am having some issues.
>
> Why do you keep starting new threads for the same basic question? Clearly
> you are some kind of "a
>
> I don't use apache, so I can't speak to that resource agent's config. I can
> say though that overall it looks okay with two exceptions.
>
> You *must* configure fencing for the cluster to work properly. Even without
> shared storage, a node failure will trigg
On Fri, 14 Oct 2011 16:58:01 -0400, Joey L wrote:
> I am new to redhat cluster and i am having some issues.
Why do you keep starting new threads for the same basic question? Clearly
you are some kind of "architect" who is trying to put together a proposal
for some client or othe
don't use apache, so I can't speak to that resource agent's config. I
can say though that overall it looks okay with two exceptions.
You *must* configure fencing for the cluster to work properly. Even
without shared storage, a node failure will trigger a fence call which,
because i
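To make the fencing point concrete: in cluster.conf it is a <fence> block per node plus a matching <fencedevices> section, roughly along these lines. This is only a skeleton; fence_ipmilan, the addresses and the credentials are placeholders, and the right agent depends on what out-of-band management the hardware offers:

<clusternode name="node1.example.com" nodeid="1">
  <fence>
    <method name="1">
      <device name="ipmi-node1"/>
    </method>
  </fence>
</clusternode>
...
<fencedevices>
  <fencedevice name="ipmi-node1" agent="fence_ipmilan"
               ipaddr="192.168.1.201" login="admin" passwd="secret"/>
</fencedevices>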
On Fri, Oct 14, 2011 at 4:58 PM, Joey L wrote:
> I am new to redhat cluster and i am having some issues.
>
> 1. I am looking for a simple cluster.conf that I can use for :
> A. failing over an ip address.
> B. failing over apache.
> C. failing over mysql
> D. failing over a
I am new to Red Hat Cluster and I am having some issues.
1. I am looking for a simple cluster.conf that I can use for:
A. failing over an IP address.
B. failing over Apache.
C. failing over MySQL.
D. failing over Asterisk.
E. failing over an NFS mount.
I have created the following cluster.conf
Hi,
Does anyone know the current status of packaging mysql cluster for squeeze,
since the feature is forked out in 5.1?
There's an old bug (560244) about this, where Toni Mueller was working on
packaging it but there's been no activity on it since August and there are
no packages
Samba PDC, LDAP with Cluster
I have a domain controller installed with Samba PDC, OpenLDAP, and PAM/NSS; I have
looked for some configuration to add a BDC, but found nothing.
I have been told that a better option would be to add a cluster, since it has more
advantages in addition to balancing the load.
Any
Has anyone tried to create a cluster using Kerrighed patched Debian kernel
?? I need a kerrighed patched kernel image (along with source, and deb
package)
I would like to get some help on that. It seems that Debian has very little
support for HPC (apart from some wiki pages and package lists).
I
On Thursday 22 October 2009 21:17:05 Brent Clark wrote:
> Hiya
>
> Would anyone be able to make any recommendations for a Cluster Filesystem.
Not a recommendation, exactly, but a thought -- it is possible to
do third-party implementations, you apparently don't have to buy i
On Fri, Oct 23, 2009 at 5:35 AM, Brent Clark wrote:
> On 23/10/2009 06:27, David Brown wrote:
>>
>> You can also checkout Lustre. Its a HPC File System for lots of the
>> Top500.org machines. You can check them out at www.lustre.org iirc
>> they are in debian somewhere.
>>
>> Thanks,
>> - David Br
On 23/10/2009 06:27, David Brown wrote:
You can also check out Lustre. It's an HPC file system for lots of the
Top500.org machines. You can check them out at www.lustre.org; IIRC
they are in Debian somewhere.
Thanks,
- David Brown
Hiya
Thanks for this.
Will definitely look into it.
Dont lik
make any recommendations for a Cluster Filesystem.
>
> I was looking at GlusterFS, but from what I read via other forums /
> mailinglists, its not ready for production, also its not in Lenny.
>
> I work for quite a large hosting company, so I'm looking for something
> stable and scalable,
> A friend of mine said I should look at Red Hat's GFS, but I don't know how the
> co. would feel about looking at Red Hat.
AFAIK it's not specific to Red Hat; they just happen to be the original
designers of it. It should be available under any distribution,
including gNewSense and Debian.
Hiya
Would anyone be able to make any recommendations for a cluster filesystem?
I was looking at GlusterFS, but from what I read via other forums /
mailing lists, it's not ready for production; also, it's not in Lenny.
I work for quite a large hosting company, so I'm looking for something
On Wed, 8 Apr 2009 18:01:11 +0800
hongyi.z...@gmail.com wrote:
> On Wednesday, April 8, 2009 at 13:48, jsp...@sun.ac.za wrote:
> > On Tue, Apr 07, 2009 at 10:52:50PM +0800, hongyi.z...@gmail.com wrote:
>
> >> I've setup a Debian cluster to construct a HPC worksta
Hello List,
what about SLURM (the slurm-llnl Debian package)?
hth,
Jerome
hongyi.z...@gmail.com wrote:
On Wednesday, April 8, 2009 at 13:48, jsp...@sun.ac.za wrote:
On Tue, Apr 07, 2009 at 10:52:50PM +0800, hongyi.z...@gmail.com wrote:
I've setup a Debian cluster to construct
On Wednesday, April 8, 2009 at 13:48, jsp...@sun.ac.za wrote:
> On Tue, Apr 07, 2009 at 10:52:50PM +0800, hongyi.z...@gmail.com wrote:
>> I've setup a Debian cluster to construct a HPC workstation. Now, I
>> want to install one of the job management system softwares, such as
On Tue, Apr 07, 2009 at 10:52:50PM +0800, hongyi.z...@gmail.com wrote:
> I've setup a Debian cluster to construct a HPC workstation. Now, I
> want to install one of the job management system softwares, such as
> PBS, LSF, or some others.
>
> But I'm a newbie on do t
Dear all,
I've set up a Debian cluster to construct an HPC workstation. Now, I want to
install one of the job management systems, such as PBS, LSF, or some
others.
But I'm a newbie at this job. Who can give me some hints on the following
issues:
1- Which job managem
On Wednesday 03 December 2008, Micha Feigin <[EMAIL PROTECTED]> wrote
about 'Re: Building a cluster with debian?':
>On Wed, 3 Dec 2008 14:18:37 +0200
>
>Micha Feigin <[EMAIL PROTECTED]> wrote:
>> On Wed, 3 Dec 2008 12:40:45 +0200
>>
>> Johann Spi
ftware out there that will do that, I read about
> > > some recently, but can't find reference to it now :(
> >
> > Have a look at http://gridengine.sunsource.net/
> >
>
> Thanks, looks like a glorified pbs mostly but may be of some help. I was
> hoping to
Hello,
I missed the first message:
in case you are looking for a queue manager for a cluster:
slurm-llnl (Lenny, Sid) rocks well!
hth,
Jerome
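To make the slurm-llnl suggestion concrete: once slurmctld/slurmd are set up, a job is just a shell script with #SBATCH directives. A minimal sketch; the node and task counts and the MPI binary are placeholders:

$ cat job.sh
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --nodes=2
#SBATCH --ntasks=8
#SBATCH --time=01:00:00
srun ./my_mpi_app      # srun starts the tasks on the allocated nodes

$ sbatch job.sh        # submit
$ squeue               # watch the queue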
Johann Spies wrote:
On Tue, Dec 02, 2008 at 04:53:45PM +1100, Alex Samad wrote:
I believe there is software out there that will do that, I read about
w :(
>
> Have a look at http://gridengine.sunsource.net/
>
Thanks, looks like a glorified PBS mostly, but it may be of some help. I was hoping
to somehow abuse NUMA and processor queues to make the cluster look like a single
NUMA computer, but that seems to require hardware support (multi process
On Tue, Dec 02, 2008 at 04:53:45PM +1100, Alex Samad wrote:
> I believe there is software out there that will do that, I read about
> some recently, but can't find reference to it now :(
Have a look at http://gridengine.sunsource.net/
Regards
Johann
--
Johann Spies Telefoon: 021-808
On Mon, Dec 01, 2008 at 02:13:02AM +0200, Micha Feigin wrote:
> I'm trying to find out if it's possible to build a debian based cluster from 4
> identical quad core machines that would behave as one NUMA machine?
>
> AFAIK we have one such machine based on opetron p
I'm trying to find out if it's possible to build a Debian-based cluster from 4
identical quad-core machines that would behave as one NUMA machine.
AFAIK we have one such machine based on Opteron processors at uni, but I haven't
been able to find out how that is set up, i.e. whether
On Fri, 12 Sep 2008 21:01:38 +0300
Micha <[EMAIL PROTECTED]> wrote:
> On Fri, 12 Sep 2008 09:09:34 +0100
> michael <[EMAIL PROTECTED]> wrote:
>
> >
> > On 12 Sep 2008, at 02:57, Micha wrote:
> >
> > > I'm trying to run intel'
On Fri, 12 Sep 2008 09:09:34 +0100
michael <[EMAIL PROTECTED]> wrote:
>
> On 12 Sep 2008, at 02:57, Micha wrote:
>
> > I'm trying to run intel's cluster openmp on my machine. For some
> > reason it
> > crashes with a SIGBUS (Bus Error) when I
On 12 Sep 2008, at 02:57, Micha wrote:
I'm trying to run intel's cluster openmp on my machine. For some
reason it
crashes with a SIGBUS (Bus Error) when I run it on my machine. The
exact same
executable with the same libraries works fine on a different one
(although it
is itani
I'm trying to run Intel's Cluster OpenMP on my machine. For some reason it
crashes with a SIGBUS (Bus Error) when I run it on my machine. The exact same
executable with the same libraries works fine on a different one (although it
is itanium).
I'm guessing that I got some memro
Hello,
I have configured drbd+heartbeat.
If cluster2 (the master) is off, cluster1 (the slave) takes over all of
cluster2's resources.
But if cluster2 comes back up, it receives all the resources, but the data does not
come back to cluster2, and /home is not mounted on cluster1.
How can I do the
Michael Madden <[EMAIL PROTECTED]>:
>
> Is anyone aware of a project similar to Rocks Clusters
> (http://www.rocksclusters.org/) that uses Debian as its base? Rocks is
xCAT can do multiple versions of Red Hat, so it shouldn't be all that
difficult to slot in Debian. There's a good IBM TechW
oss a
wide range of software and hardware, as well as with most of the distros
you have listed and many more you didn't. I have not found anything like
Rocks; at least not with the cluster work I do/have done.
Most of the Debian-based distros I have worked with are good and get the
job
> -Original Message-
> From: Michael Madden [mailto:[EMAIL PROTECTED]
> Sent: Thursday, July 31, 2008 7:55 AM
> To: debian-user@lists.debian.org
> Subject: HPC Cluster with Debian Etch
>
> Hello:
>
> Is anyone aware of a project similar to Rocks Clusters
>
Hello:
Is anyone aware of a project similar to Rocks Clusters
(http://www.rocksclusters.org/) that uses Debian as its base? Rocks is
super simple to get set up; I recently set up an 8-node HPC cluster with
Rocks to develop MPI applications in under an hour. However I'd pref
On Jul 28, 7:20 pm, Alex Samad <[EMAIL PROTECTED]> wrote:
> > > > On 07/25/08 16:32, Bob wrote:
[SNIP]
>
> > So I'm really looking for the best open, linux based solution. The
> > more & more I look into this, the more it seems to me like drbd v8
> > running in primary/primary config, over GFS mig
On Mon, Jul 28, 2008 at 10:18:03AM -0700, Bob wrote:
> On Jul 26, 1:50 am, Alex Samad <[EMAIL PROTECTED]> wrote:
> > On Fri, Jul 25, 2008 at 06:09:24PM -0500, Ron Johnson wrote:
> > > -BEGIN PGP SIGNED MESSAGE-
> > > Hash: SHA1
> >
> > > On 07/25/08 16:32, Bob wrote:
[snip]
> >
>
> Alex
On Jul 26, 3:50 am, "James Youngman" <[EMAIL PROTECTED]> wrote:
> On Fri, Jul 25, 2008 at 10:32 PM, Bob <[EMAIL PROTECTED]> wrote:
> > however - you can run drbd in a primary/primary config - this
> > sounds like what I want. But it sounds like I need a clustering files
> > system to do this like
ean rebooting all the VM's.
> > >however - you can run drbd in a primary/primary config - this
> > > sounds like what I want. But it sounds like I need a clustering files
> > > system to do this like GFS. After countless hours researching this,
> > >
at I want. But it sounds like I need a clustering files
> > system to do this like GFS. After countless hours researching this,
> > I'm still not sure how to do it - do I need GFS? OCFS? NBD?
>
> > Now drbd isn't really a cluster, it's just raid1-ing 2 pc's
On Fri, Jul 25, 2008 at 10:32 PM, Bob <[EMAIL PROTECTED]> wrote:
> however - you can run drbd in a primary/primary config - this
> sounds like what I want. But it sounds like I need a clustering files
> system to do this like GFS. After countless hours researching this,
> I'm still not sure how
. But it sounds like I need a clustering files
> > system to do this like GFS. After countless hours researching this,
> > I'm still not sure how to do it - do I need GFS? OCFS? NBD?
> >
> > Now drbd isn't really a cluster, it's just raid1-ing 2 pc's -
an rebooting all the VM's.
>however - you can run drbd in a primary/primary config - this
> sounds like what I want. But it sounds like I need a clustering files
> system to do this like GFS. After countless hours researching this,
> I'm still not sure how to do it - do I
clustering file
system to do this like GFS. After countless hours researching this,
I'm still not sure how to do it - do I need GFS? OCFS? NBD?
Now drbd isn't really a cluster, it's just raid1-ing 2 pc's - this
could be all I need.
But - would a REAL cluster be a better solution? I bel
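The primary/primary setup discussed above boils down to two pieces: a DRBD resource with allow-two-primaries, and a cluster filesystem (OCFS2 or GFS) on top, because an ordinary ext3 must never be mounted on both nodes at once. A rough DRBD 8 sketch; hostnames, disks and addresses are placeholders, and the split-brain policies are only one possible choice:

/etc/drbd.conf (excerpt)
###
resource r0 {
    protocol C;
    net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    on nodeA {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on nodeB {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
###
# once the resource is connected and synced, on both nodes:
$ drbdadm primary r0
# create the cluster filesystem once, on one node only:
$ mkfs.ocfs2 /dev/drbd0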
Benedict Verheyen wrote:
I didn't use ocfs2 or hostfs to do what I wanted.
I moved the home directories to the UML and it works for me.
Regards,
Benedict
Benedict Verheyen wrote:
Hi,
I want to run ocfs2 to mount a shared filesystem for 3 UMLs.
ocfs2 allows 2 or more UMLs to mount the same filesystem, so it should
solve the limitations of hostfs.
Anyway, I compiled ocfs2 support into the kernel and made an
/etc/ocfs2/cluster.conf file.
Howe
Hi,
I want to run ocfs2 to mount a shared filesystem for 3 UMLs.
ocfs2 allows 2 or more UMLs to mount the same filesystem, so it should solve
the limitations of hostfs.
Anyway, I compiled ocfs2 support into the kernel and made an
/etc/ocfs2/cluster.conf file.
However, when I want to start th
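For what it's worth, /etc/ocfs2/cluster.conf is what the o2cb init script reads: one cluster: stanza plus one node: stanza per node, the sub-lines indented (traditionally with a tab), the node names matching each machine's hostname, and the identical file on every node. A sketch with names and addresses as placeholders:

/etc/ocfs2/cluster.conf
###
cluster:
        node_count = 3
        name = umlcluster

node:
        ip_port = 7777
        ip_address = 10.0.0.1
        number = 0
        name = uml1
        cluster = umlcluster

node:
        ip_port = 7777
        ip_address = 10.0.0.2
        number = 1
        name = uml2
        cluster = umlcluster

# ... plus a third node: stanza for uml3 in the same pattern
###
# then let the init scripts bring the cluster online:
$ dpkg-reconfigure ocfs2-tools   # should enable o2cb at boot and set the cluster name
$ /etc/init.d/o2cb start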