On Fri, 12 Mar 2021, ept8e...@secmail.pro wrote:
Hi, I was reading about how to unlock an encrypted root partition remotely
(unattended). I'd like to ask what the compatible way to do this is on CentOS,
and what administrators commonly use.
I think the simplest option is to install dropbear in the initramfs to allow
remote SSH and manual passphrase entry. I found many …
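A minimal sketch of that approach on CentOS 8, assuming the third-party
dracut-crypt-ssh module (it is not in the base repos, and details such as the
default port and the console_auth helper come from its README, so verify them
against the version you install):

# after installing dracut-crypt-ssh per its README:
echo 'add_dracutmodules+=" crypt-ssh "' > /etc/dracut.conf.d/99-crypt-ssh.conf
# bring networking up inside the initramfs so dropbear is reachable
grubby --update-kernel=ALL --args="rd.neednet=1 ip=dhcp"
dracut -f --regenerate-all
# after a reboot, from another machine (that module's dropbear defaults to port 222):
ssh -p 222 root@server
console_auth        # prompts for the LUKS passphrase inside the initramfs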
On 3/12/21 4:45 AM, Kaushal Shriyan wrote:
> Is there a way to expand the xfs filesystem /dev/nvme0n1p2, which is 7.8G,
> so that it occupies the remaining free disk space of 60GB?

Can you set up an identical EC2 instance to test the process? I definitely
wouldn't do this on a system with data that you need …
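For reference, the usual flow on an EC2 instance is growpart plus an online
xfs grow; a rough sketch, assuming /dev/nvme0n1p2 is the root filesystem as
the df output in this thread suggests:

dnf install cloud-utils-growpart
lsblk /dev/nvme0n1           # confirm the unallocated space sits after partition 2
growpart /dev/nvme0n1 2      # grow partition 2 into the free space
xfs_growfs /                 # grow the mounted xfs filesystem online

xfs can be grown but not shrunk, so snapshot the volume before trying this.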
I think you need policy routing:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/configuring-policy-based-routing-to-define-alternative-routes_configuring-and-managing-networking
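That document covers the persistent NetworkManager setup; the underlying
idea, sketched here with plain ip commands and placeholder addresses, is a
separate routing table selected by a rule:

ip route add default via 192.0.2.1 dev eth0 table 100   # alternative default route
ip rule add from 192.0.2.10/32 table 100                # traffic from this source uses it
ip route flush cache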
On 3/12/21 1:45 PM, Kaushal Shriyan wrote:
> Is there a way to expand the xfs filesystem /dev/nvme0n1p2, which is 7.8G,
> so that it occupies the remaining free disk space of 60GB?

parted probably could do it. There is also a gparted GUI
(https://gparted.org/), but it doesn't seem to be in CentOS 8. Maybe boot …
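An untested sketch of the parted route, assuming partition 2 is the last
partition on the disk (older parted versions may prompt interactively):

parted /dev/nvme0n1 print free        # check where the free space sits
parted /dev/nvme0n1 resizepart 2 100% # extend partition 2 to the end of the disk
xfs_growfs /                          # then grow the filesystem online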
> Hi,
>
> Is there a way to expand the xfs filesystem /dev/nvme0n1p2, which is 7.8G,
> so that it occupies the remaining free disk space of 60GB?
>
> [root@ip-10-0-0-218 centos]# df -hT --total
> Filesystem     Type      Size  Used Avail Use% Mounted on
> devtmpfs       devtmpfs  1.7G     0  1.7G   0% /dev
> tmpfs …
On 3/12/21 12:25 PM, yf chu wrote:
> The applications on all those servers are the same, and they work on the
> same data. I still don't know why the size of buff/cache differs between
> servers.

You might want to check the kernel threads...
If you use md arrays you can have a very high load …
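A quick way to look at both, using only standard tools (kernel threads are
children of kthreadd, PID 2, and md resync activity shows up in /proc/mdstat):

ps -eo pid,ppid,pcpu,stat,comm | awk '$2 == 2'   # kernel threads and their CPU use
cat /proc/mdstat                                 # is an md array rebuilding or checking?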
At 2021-03-12 16:35:09, "Simon Matter" wrote:
Hi,
You said that you have multiple systems running this same application. But
do they work with the same data on disk, or are there big differences?
From how I understand the figures below, your buff/cache seems a bit low if
you read a lot of data from disk. If you read a lot of data and …
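To compare what the cache is doing on two of the servers, something like this
(standard tools only) is a reasonable starting point:

free -h                                          # the buff/cache column
grep -E '^(Buffers|Cached|Dirty)' /proc/meminfo  # page cache and buffer detail
vmstat 5 3                                       # bi/bo columns show ongoing disk I/O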
On Wed, Feb 17, 2021 at 2:04 AM, Kenneth Porter wrote:
>
> --On Tuesday, February 16, 2021 12:00 PM +0530, Thomas Stephen Lee wrote:
>
> > The solution should be a software one, without acquiring new hardware.
> > What is ideal is the bandwidth of two connections, and half bandwidth
> > when one …
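If the two connections are separate uplinks with their own gateways, one
software-only option is an ECMP default route; a hedged sketch with
placeholder gateways (Linux balances per flow, not per packet, and dead
gateway detection is limited, so test the failover behaviour yourself):

ip route replace default scope global \
    nexthop via 192.0.2.1 dev eth0 weight 1 \
    nexthop via 198.51.100.1 dev eth1 weight 1

If instead both links go to the same switch, a bonding mode such as 802.3ad
is the more usual answer.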