Re: AWS gp2 -> gp3

2023-04-01 Thread Miroslav Suchý

On 01. 04. 23 at 20:27, Kevin Fenzi wrote:

Should we stop uploading 'standard' and 'gp2'? I mean, is there any
reason anyone would want to use one of those? I find it just confusing
that there's multiple types for each image. If we can, I'd say we should
just do gp3?


'standard' is the magnetic volume type of the previous generation:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html#vol-type-prev

Yes, this is obsolete and should be replaced by either st1 (better throughput) 
or sc1.

They are super cheap. E.g., sc1 costs $0.015/GB-month (compared to gp3 at $0.080). IMHO st1 or sc1 makes sense for
rootfs volumes on machines where everything is loaded into memory after start, or for data that is rarely accessed.


The gp2->gp3 migration is always worth doing. You just need to mind the settings: if the volume is over 1TB you should set
throughput and IOPS above the baseline to get the same performance. But even for 6TB volumes you get a better price with gp3.
See the table "Cost comparison between gp2 and gp3 in the us-east-1 (N. Virginia) Region" at the bottom of


https://aws.amazon.com/blogs/storage/migrate-your-amazon-ebs-volumes-from-gp2-to-gp3-and-save-up-to-20-on-costs/

And even for 16TB volumes with maximized IOPS and throughput you get 15%
savings.

You can play with the calculator at https://aws.amazon.com/ebs/resources/ (an xls
sheet).
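As a rough rule of thumb (my own back-of-the-envelope sketch, not official AWS guidance): gp2 gives about 3 IOPS per GiB (minimum 100, maximum 16000) and up to 250 MiB/s throughput, while gp3 starts at 3000 IOPS and 125 MiB/s, so only larger volumes need explicit overrides. Something like:

    # Rough estimate of the gp3 settings needed to match a gp2 volume's baseline.
    # The constants are the documented gp2 limits; tune to your own workload.
    def gp3_settings_for(gp2_size_gib):
        gp2_baseline_iops = min(max(3 * gp2_size_gib, 100), 16000)
        iops = max(3000, gp2_baseline_iops)               # gp3 already starts at 3000
        throughput = 125 if gp2_size_gib < 1024 else 250  # crude cut-off around 1 TiB
        return iops, throughput

    print(gp3_settings_for(6144))   # 6 TiB volume -> (16000, 250)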


How can you convert an existing machine? Is that something on the aws
web console? or something via the cli? We should definitely convert all
our instances. :)


"How to migrate from gp2 to gp3" chapter from

https://aws.amazon.com/blogs/storage/migrate-your-amazon-ebs-volumes-from-gp2-to-gp3-and-save-up-to-20-on-costs/

You can do this both from the web UI and from the command line.
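For the command line, a minimal boto3 sketch (the volume ID and region are placeholders; pass Iops/Throughput only when you need more than the gp3 baseline, as discussed above):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Convert a single volume in place; it stays attached and usable while
    # the modification runs in the background.
    ec2.modify_volume(
        VolumeId="vol-0123456789abcdef0",   # placeholder
        VolumeType="gp3",
        # Iops=4000, Throughput=250,        # only if you need more than the defaults
    )

The equivalent "aws ec2 modify-volume --volume-type gp3" call does the same from the aws CLI.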


Finally, on timing... we should wait until FESCo approves the change
(but I don't see why they wouldn't).


We do not need to. The change is about the default storage type of Fedora Cloud images.
It is not related to what we use in infra :)

Of course we can wait and align if we want to.


  We do go into f38 final freeze on
tuesday next week, but we can try and change it monday? Or just get a
freeze break to push it out after that.


Freeze is next week? Who stole the time?

OK, I think there is no need to rush.

I will migrate the Copr machines at the beginning of the week, likely on Monday after I discuss it with the team. And Copr is
not in the set of machines that we freeze anyway.


And after the un-freeze I can migrate the rest. Or we can wait till FESCo approves
the change and align with it.

Miroslav
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


Re: AWS gp2 -> gp3

2023-04-04 Thread Miroslav Suchý

On 04. 04. 23 at 23:50, Kevin Fenzi wrote:

I'm still unsure if that means we only do gp3 or what is the list of
ones we do after this change?


Depends on your use case. For Copr we use gp3 for all volumes except copr-dist-git (5TB) and the copr-be data volumes where we
store dnf repositories (3*12TB). We do not need high IOPS there, and a throughput of 144/250 MB/s (baseline/burst) is enough for our
use case. And sc1 costs only $0.015/GB-month while gp3 costs $0.08/GB-month. For our copr-be volumes that makes $520 vs $2880 per month.


To sum it up:

https://aws.amazon.com/ebs/volume-types/

* sc1 - the cheapest volume type, good enough if a max throughput of 250 MB/s is enough for
you. $0.015/GB-month

* st1 - still magnetic; IOPS and throughput are twice those of sc1, but
the price is three times higher: $0.045/GB-month

* gp2 - makes no sense at all now

* gp3 - 3k-16k IOPS, 125-1000 MB/s throughput. $0.08/GB-month

* io1 - IMHO makes no sense now

* io2 - at $0.125/GB-month + $0.065 per provisioned IOPS-month I would never consider
this, except maybe for latency-sensitive DB volumes

For rootfs in fedora-infra you want gp3 only.

But generally speaking, for public cloud images you want to preserve a magnetic volume type as well. E.g. my personal webserver
runs off a Fedora cloud image in AWS (under my private account), and because it gets just a few hits per hour, the st1 volume
for rootfs and the data volume is just fine. And I have a 6GB volume running for the whole month for just 9 cents.
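If you want to do the same back-of-the-envelope math for your own volumes, a trivial sketch using the per-GB-month list prices quoted above (us-east-1 prices at the time of writing; check the calculator for current numbers):

    # Monthly EBS cost estimate from the per-GB-month prices quoted above.
    PRICE_PER_GB_MONTH = {"sc1": 0.015, "st1": 0.045, "gp3": 0.08}

    def monthly_cost(volume_type, size_gib):
        return size_gib * PRICE_PER_GB_MONTH[volume_type]

    print(monthly_cost("sc1", 1024))   # ~ $15 per TiB-month
    print(monthly_cost("gp3", 1024))   # ~ $82 per TiB-month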


Miroslav


Re: AWS gp2 -> gp3

2023-04-05 Thread Miroslav Suchý

On 05. 04. 23 at 19:24, Kevin Fenzi wrote:

Currently we upload 'standard' and 'gp2'.

I failed to find what "standard" means. Likely one of the magnetic ones, but 
don't know which one.

Should we do 'standard' and 'gp3'? or 'sc1' and 'gp3'?



Yes.


I guess that's really for the cloud sig to decide...


Yes.

M.


Lot of snapshots in AWS Sydney region

2023-07-02 Thread Miroslav Suchý
I was going through our spending in AWS and I found that we spend a lot in the Sydney region (?). In detail, most of the
bill there is because of stored snapshots. There are 7508 of them, dating back to 2013, for volumes that do not exist
any more.


The only instances (and volumes) we have in Sydney today are:

* mref1.aps2.stream.centos.org

* mref2.apse2.stream.centos.org

With no tags or description.

I can easily remove the accrued snapshots, but I do not know the details. I can set up a Recycle Bin rule to delete
snapshots older than 1-365 days. I can set it to 365. Objections?


And one of the latest snapshots is

snap-04716c8d5d0f23c6e (fedora-coreos-38.20230609.3.0-x86_64) 



which is for volume

vol- 



that does not exist any more. There are more such snapshots. So whoever is running this process: you are good at
deleting volumes, but you are leaving the snapshots behind. Likely ones that no one needs.


--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys


Re: Lot of snapshots in AWS Sydney region

2023-07-12 Thread Miroslav Suchý

On 13. 07. 23 at 0:26, Dusty Mabe wrote:

If you say it's an issue then hopefully we can give this some priority and get 
to it soon.

Removing the "old ones" isn't that easy to do. Our production AMIs and 
development AMIs are all mixed together so it would be hard to come up with a criteria 
without implementing the garbage collection I linked to above.


But AWS will not allow you to delete a snapshot that is associated with an AMI (as you said). So we can delete everything older
than one year, and the deletions that error out are the ones we still want to keep. We will just ignore those errors.
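A minimal boto3 sketch of that "delete and ignore the in-use errors" idea (the snapshot IDs would come from a describe_snapshots listing; InvalidSnapshot.InUse is the error AWS returns when the snapshot still backs an AMI):

    import boto3
    from botocore.exceptions import ClientError

    ec2 = boto3.client("ec2")

    def delete_if_unused(snapshot_id):
        # AWS refuses to delete a snapshot that still backs an AMI;
        # skip those and keep going.
        try:
            ec2.delete_snapshot(SnapshotId=snapshot_id)
            return True
        except ClientError as e:
            if e.response["Error"]["Code"] == "InvalidSnapshot.InUse":
                return False
            raise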


--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys


Re: Lot of snapshots in AWS Sydney region

2023-07-12 Thread Miroslav Suchý

On 11. 07. 23 at 15:53, Dusty Mabe wrote:

Apologies for not responding sooner. Actually for some reason this is the first 
email I've seen
in the thread so maybe I need to check my spam filters. Either way, apologies.

The reason you are seeing snapshots but no volumes is because these snapshots 
are used as backing
storage for AMIs. If the snapshot is still associated with an AMI AWS won't let 
you delete it.

On the Fedora CoreOS side we need to implement garbage collection so that we 
delete all AMIs and
snapshots from our development streams. For our production streams we'll 
probably take a more
conservative approach to garbage collection, but we'll need to start garbage 
collecting those too.

For Fedora Cloud, that working group will also need to look at their processes 
and implement garbage
collection too. It could either be a separate process or it could be working 
with you to set a
policy directly in AWS to clean up after some time.

For Fedora CoreOS we'd like to implement the GC outside of AWS since we'd like 
to have the same GC
policy for all clouds we create resources in.


Can you create an issue for each of these cases, so it does not get lost?


Does this make sense?


Sure. Do you expect this soonish? Or should I manually remove the old ones now?

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys


Re: Lot of snapshots in AWS Sydney region

2023-07-11 Thread Miroslav Suchý

On 05. 07. 23 at 21:17, Kevin Fenzi wrote:

On Mon, Jul 03, 2023 at 04:02:02PM +0200, Fabian Arrotin wrote:

On 03/07/2023 06:39, Miroslav Suchý wrote:

I was going through our spending in AWS and I found that we spend a lot
in the Sydney region (?). In detail, most of the bill there is because of
stored snapshots. There are 7508 of them, dating back to 2013, for
volumes that do not exist any more.


Yeah, let's get the Fedora coreos folks to look, it seems like those are
their images. Perhaps they are making them, but not cleaning up old ones
in that region?



It has been a week with no response (I know, holiday season...). I will give it one more week. If no one raises a voice, I
will create a Recycle Bin rule that will automatically delete **ALL** volume snapshots older than one year, in ALL AWS
regions where we have some snapshots. I will work on that next Monday.


If you need to preserve some snapshot longer than one year, please let me know.

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys


AWS cleanup - what to delete next?

2024-02-08 Thread Miroslav Suchý

Yesterday I finally deleted all Fedora-AtomicHost AMIs and associated snapshots
(it took the whole night to finish).

This time, I know we have to start with AMIs first (and only then delete 
snapshots).
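As a minimal boto3 sketch of that ordering (just the idea, not the exact script I run):

    import boto3

    ec2 = boto3.client("ec2")

    def delete_ami_with_snapshots(image_id):
        # Collect the backing snapshots before deregistering the AMI;
        # deleting them first would fail because they are still in use.
        image = ec2.describe_images(ImageIds=[image_id])["Images"][0]
        snap_ids = [m["Ebs"]["SnapshotId"]
                    for m in image.get("BlockDeviceMappings", []) if "Ebs" in m]
        ec2.deregister_image(ImageId=image_id)
        for snap_id in snap_ids:
            ec2.delete_snapshot(SnapshotId=snap_id)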

Where can I continue with the cleanup? There are several dozen thousand AMIs. At the end of this email I give a
random sample from the list.


I am very afraid of deleting something that is still currently in use and that
is listed somewhere as a golden image.

Or do we not care about anything but images of stable Fedoras, i.e. everything that matches 'Fedora.*-X-.*' where X is
a number below 38?
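If we go that route, a hypothetical filter for that pattern could look like this (the regex and the cutoff are only an illustration of the question, not a decision):

    import re

    # Keep only AMIs whose name carries a Fedora release number below the cutoff.
    def is_old_fedora(ami_name, cutoff=38):
        m = re.match(r"Fedora.*?-(\d+)[-_.]", ami_name)
        return bool(m) and int(m.group(1)) < cutoff

    print(is_old_fedora("Fedora-Cloud-Base-24-20160601.n.0.x86_64-us-west-1-HVM-standard-0"))  # True
    print(is_old_fedora("Fedora-Cloud-Base-39-1.1.aarch64-hvm-us-west-1-gp3-0"))               # False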


Miroslav


 * ami-26c3b846 
Fedora-Cloud-Base-24-20160601.n.0.x86_64-us-west-1-HVM-standard-0
 * ami-05fd9470f34e60fdc Fedora-Cloud-Base-39-1.1.aarch64-hvm-us-west-1-gp3-0
 * ami-062b2a2f58fdaf72f fedora-coreos-36.20220820.2.0-aarch64
 * ami-f4bbff94 Fedora-Cloud-Atomic-23-20160626.x86_64-us-west-1-HVM-standard-0
 * ami-31da9d51 Fedora-Atomic-24-20160706.0.x86_64-us-west-1-HVM-gp2-0
 * ami-0bd20e7557d7bdbb6 
Fedora-Cloud-Base-29-20190726.0.aarch64-hvm-us-west-1-standard-0
 * ami-20a3a640 Fedora-Atomic-27-20171211.0.x86_64-us-west-1-HVM-gp2-0
 * ami-fab2c59a Fedora-Cloud-Base-23-20160127.2.x86_64-us-west-1-PV-standard-0
 * ami-981812f8 Fedora-Cloud-Base-27-20180303.0.x86_64-us-west-1-HVM-gp2-0
 * ami-2134cb65 Fedora-Cloud-Base-23_Alpha-20150806.2.x86_64-us-west-1-HVM-gp2-0
 * ami-0b8037b9dcf49e74e fedora-coreos-39.20231101.1.0-x86_64
 * ami-312c0f51 Fedora-Atomic-25-20170601.0.x86_64-us-west-1-HVM-gp2-0
 * ami-bacaccda Fedora-Atomic-26-20171226.0.x86_64-us-west-1-HVM-gp2-0
 * ami-0b363a6b Fedora-Cloud-Base-26-20180129.0.x86_64-us-west-1-HVM-gp2-0
 * ami-b2909cd2 
Fedora-Atomic-Rawhide-20180131.n.0.x86_64-us-west-1-HVM-standard-0
 * ami-04165988107e37bff 
Fedora-Cloud-Base-28-20190510.0.x86_64-hvm-us-west-1-standard-0
 * ami-0b4577fc26dfa0a41 fedora-coreos-38.20230722.1.0-aarch64
 * ami-0f46b6ee06ec9518a fedora-coreos-36.20220505.2.0-x86_64
 * ami-3a30315a Fedora-Cloud-Base-26-20180101.0.x86_64-us-west-1-HVM-standard-0
 * ami-089d45c0dde166fe0 
Fedora-Cloud-Base-30-20190728.0.aarch64-hvm-us-west-1-gp2-0
 * ami-eb84ba8b 
Fedora-Atomic-Rawhide-20171112.n.0.x86_64-us-west-1-HVM-standard-0
 * ami-05254bd86175b9023 fedora-coreos-35.20220424.3.0-x86_64
 * ami-f5ecdc95 Fedora-Atomic-26-20171001.0.x86_64-us-west-1-HVM-gp2-0
 * ami-c2ab83a2 Fedora-Cloud-Base-25-20170802.0.x86_64-us-west-1-HVM-standard-0
 * ami-df3367bf Fedora-Atomic-25-20161121.0.x86_64-us-west-1-HVM-standard-0
 * ami-0419c7c8be07bf733 
Fedora-Cloud-Base-30-20190419.n.0.x86_64-hvm-us-west-1-standard-0
 * ami-c20612a2 Fedora-Atomic-27-20180314.0.x86_64-us-west-1-HVM-standard-0
 * ami-a4192ec4 Fedora-Atomic-26-20170908.0.x86_64-us-west-1-HVM-gp2-0
 * ami-9dc1f1fd Fedora-Cloud-Base-27-20171001.n.2.x86_64-us-west-1-HVM-gp2-0
 * ami-d90255b9 Fedora-Cloud-Base-25-20161130.1.x86_64-us-west-1-PV-gp2-0
 * ami-72747712 
Fedora-Cloud-Base-Rawhide-20180110.n.0.x86_64-us-west-1-HVM-gp2-0
 * ami-885052e8 
Fedora-Cloud-Base-Rawhide-20180115.n.0.x86_64-us-west-1-HVM-gp2-0
 * ami-5f5c643f 
Fedora-Cloud-Base-Rawhide-20171118.n.1.x86_64-us-west-1-HVM-gp2-0
 * ami-4b115a2b Fedora-Cloud-Base-24-20161021.0.x86_64-us-west-1-PV-standard-0
 * ami-fa40379a Fedora-Cloud-Base-23-20160127.1.x86_64-us-west-1-HVM-standard-0
 * ami-04aacbcff5e34df3b 
Fedora-Cloud-Base-32-20200605.0.x86_64-hvm-us-west-1-gp2-0
 * ami-04f2a5462112fc48f fedora-coreos-36.20220820.3.0-x86_64
 * ami-9c5a20fc Fedora-Cloud-Base-23-20160605.x86_64-us-west-1-HVM-gp2-0
 * ami-50c5f530 Fedora-Cloud-Base-25-20171002.0.x86_64-us-west-1-PV-standard-0
 * ami-db6f6fbb Fedora-Atomic-Rawhide-20180105.n.0.x86_64-us-west-1-HVM-gp2-0
 * ami-06ce8f3b14afca272 
Fedora-Cloud-Base-30-20190323.n.0.aarch64-hvm-us-west-1-standard-0
 * ami-0fc1ba6f Fedora-Atomic-24-20160601.n.0.x86_64-us-west-1-HVM-gp2-0
 * ami-6c54550c 
Fedora-Atomic-Rawhide-20171228.n.0.x86_64-us-west-1-HVM-standard-0
 * ami-a2591ac2 Fedora-Cloud-Base-24-20160809.0.x86_64-us-west-1-PV-standard-0
 * ami-b31743d3 Fedora-Cloud-Base-25-20161119.3.x86_64-us-west-1-PV-standard-0
 * ami-df4f6cbf Fedora-Atomic-26_Beta-1.3.x86_64-us-west-1-HVM-standard-0
 * ami-e790d387 Fedora-Cloud-Base-24-20160811.0.x86_64-us-west-1-PV-standard-0
 * ami-b3ddc9d3 Fedora-Cloud-Base-27-20180315.0.x86_64-us-west-1-HVM-gp2-0
 * ami-273c7947 Fedora-Cloud-Base-23-20160613.x86_64-us-west-1-PV-gp2-0
 * ami-594c1b39 Fedora-Cloud-Base-24-20161203.0.x86_64-us-west-1-PV-standard-0
 * ami-203a3e40 Fedora-Cloud-Base-27-20171214.0.x86_64-us-west-1-HVM-gp2-0
 * ami-b3b388d3 Fedora-Atomic-27-20171129.0.x86_64-us-west-1-HVM-standard-0
 * ami-d1e1f0b1 Fedora-Cloud-Base-28-20180331.n.1.x86_64-us-west-1-HVM-gp2-0
 * ami-063bd28aeead1fb4e 
Fedora-Cloud-Base-30-20190509.0.x86_64-hvm-us-west-1-gp2-0
 * ami-43464723 Fedora-Atomic-26-20171229.0.x86_64-us-west-1-HVM-standard-0
 * ami-6b38160b Fedora-Cloud-Base-24-20170713.0.x86_64-us-west-1-PV-gp2-0
 * ami-012a0519ada69f3f9 

Re: AWS cleanup - what to delete next?

2024-02-08 Thread Miroslav Suchý

On 08. 02. 24 at 17:09, Miroslav Suchý wrote:
Where can I continue with the cleanup? There are several dozen thousand AMIs. At the end of this email I give a
random sample from the list.


For the record, here is the complete list, containing 150k lines [9MB]:

https://k00.fr/s7xl2itj

--

Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Not tagged resource in AWS

2024-02-07 Thread Miroslav Suchý

This is a resource in AWS that does not have a proper tag:

Region: eu-west-1
Volumes - [id name (attached to instance, owner)]:
  * vol-0e5efafe67ed944ad N/A (apps-containerization, N/A)

Can the owner please tag (or delete) it?
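For reference, tagging it is a one-liner with boto3 (put your own group in the value):

    import boto3

    # Tag the volume so it shows up in the per-group reports.
    boto3.client("ec2", region_name="eu-west-1").create_tags(
        Resources=["vol-0e5efafe67ed944ad"],
        Tags=[{"Key": "FedoraGroup", "Value": "your-group"}],   # placeholder value
    )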

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Re: AWS cleanup - what to delete next?

2024-02-12 Thread Miroslav Suchý

On 09. 02. 24 at 20:34, Miroslav Suchý wrote:

I think we should leave "GA" images. Even though they are EOL for the
most part, I think it's still possibly nice to be able to spin one up to
test something or the like. We can find the names on our download
server, ie,

https://dl.fedoraproject.org/pub/archive/fedora/linux/releases/35/Cloud/x86_64/images/
Fedora-Cloud-Base-35-1.2 is the GA for fedora 35 cloud.


Nod. I was about to ask how I can find them... but the names match nicely. And going manually over 35 names is likely
not a big deal.


I will tag them. Then they disappear from my radar.

I propose tag

FedoraGroup=ga-archives

Any objections?


I tagged all GA images with this ^^^ tag.

I went from Fedora 39 down to Fedora 19. But I did not find any images for Fedora 19 and 20 (that is, year 2013), so I
stopped there.


I labeled the AMIs and the associated snapshots.

For the record, this is the script I used for labeling the AMIs in all regions:
https://github.com/xsuchy/fedora-infra-scripts/blob/main/label-ami.py
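The idea boils down to something like this minimal sketch (the AMI ID below is a placeholder; see the linked script for what I actually ran):

    import boto3

    def tag_ami_and_snapshots(region, image_id, tags):
        # Tag the AMI and every EBS snapshot backing it, so both carry
        # the same FedoraGroup in the reports.
        ec2 = boto3.client("ec2", region_name=region)
        image = ec2.describe_images(ImageIds=[image_id])["Images"][0]
        snap_ids = [m["Ebs"]["SnapshotId"]
                    for m in image.get("BlockDeviceMappings", []) if "Ebs" in m]
        ec2.create_tags(Resources=[image_id] + snap_ids, Tags=tags)

    tag_ami_and_snapshots("us-west-1", "ami-0123456789abcdef0",   # placeholder AMI
                          [{"Key": "FedoraGroup", "Value": "ga-archives"}])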



Who is responsible for uploading Fedora Cloud images to AWS? Fedora Cloud SIG? Somebody else? I want to make sure that
subsequent GA images will be properly tagged.



--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Re: HEADS UP - deletion of Fedora Atomic Hosts AMIs

2024-02-07 Thread Miroslav Suchý

On 09. 12. 23 at 9:21, Miroslav Suchý wrote:

As mentioned in a previous thread - I plan to delete from AWS all Fedora Atomic
Host AMIs and related snapshots.

Atomic Host was EOLed in 2019 and we have the images stored elsewhere too.

To protect the Christmas calm period, I will delete them no sooner than 2024-01-09.


It took a bit longer, but I finally got to this.

All AMIs with the name 'Fedora-AtomicHost-*' are deleted. That was 12,074 AMIs.

For the record I used this script 
https://github.com/xsuchy/fedora-infra-scripts/blob/main/delete-amis.py

The associated snapshots are being deleted right now.

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Different owner of some Fedora-Cloud-Base images in AWS?

2024-02-12 Thread Miroslav Suchý

I was wondering why I cannot tag some images in AWS, and I found that some GA
images in AWS have a different owner.

I.e., all our images have

Owner account ID 125523088429

But e.g. ami-0e4e634d022c1a3f8 in the ap-southeast-4 region has owner ID 569228561889. There are more such cases, but it
seems quite random.


To see this AMI in the web UI you have to switch from "AMIs owned by me" to "Public
images".

Is this expected? Is this some malicious thing?

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Re: AWS cleanup - what to delete next?

2024-02-09 Thread Miroslav Suchý

On 09. 02. 24 at 18:27, Kevin Fenzi wrote:

I think we should leave "GA" images. Even though they are EOL for the
most part, I think it's still possibly nice to be able to spin one up to
test something or the like. We can find the names on our download
server, ie,

https://dl.fedoraproject.org/pub/archive/fedora/linux/releases/35/Cloud/x86_64/images/
Fedora-Cloud-Base-35-1.2 is the GA for fedora 35 cloud.


Nod. I was about to ask how I can find them... but the names match nicely. And going manually over 35 names is likely not a
big deal.


I will tag them. Then they disappear from my radar.

I propose tag

FedoraGroup=ga-archives

Any objections?


We should exclude all 'current' releases (ie, 38/39/40)


*nod*



We should exclude "Rawhide" ones that are 2024? I don't think we need to
keep all the old ones there. We have them in koji if we really need them.
(At least the last month or two)

*nod*

I am unsure about the CentOS ones. We should check with them on that.

I want to put CentOS aside for now. There are "only" 2k out of 145k that are related to CentOS. We can work on them in a
later step.

Would it be worth it to rename the ones we plan to delete with a 'about
to delete' name, wait a while and then delete? Or is there any way to
tell who/how many people are using a ami?


All operations referencing an AMI use the ami-id. If I change the name, likely
no one will notice.

But I can do what I did with volumes - first tag them with FedoraGroup=garbage-collector and only then delete them. This can
lower the chance of human error on my side.


I can rename them too. That is no problem for me, but I would not bet on
somebody noticing a different name.

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


HEADS UP - deletion of Fedora Atomic Hosts AMIs

2023-12-09 Thread Miroslav Suchý

As mentioned in a previous thread - I plan to delete from AWS all Fedora Atomic
Host AMIs and related snapshots.

Atomic Host was EOLed in 2019 and we have the images stored elsewhere too.

To protect the Christmas calm period, I will delete them no sooner than 2024-01-09.

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


AWS usage per group (December)

2024-01-01 Thread Miroslav Suchý

Here comes the December edition of resources running in AWS. It's a snapshot of
resources running today.


FedoraGroup: infra
 Region: ap-south-1
   Instance Type: c5d.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 8 GiB
 Region: eu-central-1
   Instance Type: c5d.xlarge - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Instance Type: m5.xlarge - Count: 1
   Instance Type: m6gd.4xlarge - Count: 1
   Volume Type: gp3 - Total Size: 16223 GiB
 Region: us-west-1
   Instance Type: t3.medium - Count: 1
   Volume Type: gp3 - Total Size: 40 GiB
 Region: us-west-2
   Instance Type: m5.large - Count: 4
   Instance Type: m6g.large - Count: 1
   Instance Type: t3.large - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Instance Type: c6gd.xlarge - Count: 1
   Volume Type: standard - Total Size: 100 GiB
   Volume Type: gp3 - Total Size: 900 GiB
 Region: af-south-1
   Instance Type: c5d.xlarge - Count: 2
   Volume Type: gp3 - Total Size: 25 GiB
 Region: eu-west-2
   Instance Type: c5d.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 20 GiB
 Region: eu-west-1
   Instance Type: m4.10xlarge - Count: 1
 Region: ap-northeast-2
   Instance Type: c5n.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 196 GiB
 Region: sa-east-1
   Instance Type: c5.2xlarge - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 200 GiB
 Region: ap-southeast-1
   Instance Type: c5d.xlarge - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 108 GiB
 Region: us-east-1
   Instance Type: t3.medium - Count: 1
   Instance Type: t3.small - Count: 1
   Instance Type: c5.xlarge - Count: 2
   Instance Type: t2.medium - Count: 2
   Instance Type: c5.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 330 GiB
 Region: us-east-2
   Instance Type: c5d.large - Count: 1
   Instance Type: t2.micro - Count: 1
   Instance Type: t3.medium - Count: 1
   Volume Type: gp3 - Total Size: 63 GiB

FedoraGroup: min
 Region: eu-central-1
   Instance Type: t3.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 1000 GiB

FedoraGroup: centos-stream-build
 Region: us-east-2
   Instance Type: t2.micro - Count: 1

FedoraGroup: respins
 Region: us-east-1
   Volume Type: gp3 - Total Size: 500 GiB

FedoraGroup: abrt
 Region: us-east-1
   Instance Type: t3a.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 10 GiB

FedoraGroup: garbage-collector
 Region: ap-southeast-1
   Volume Type: standard - Total Size: 6 GiB

FedoraGroup: copr
 Region: us-east-1
   Instance Type: t3a.medium - Count: 5
   Instance Type: t3a.small - Count: 1
   Instance Type: t3a.xlarge - Count: 1
   Instance Type: c7a.4xlarge - Count: 1
   Instance Type: m5a.4xlarge - Count: 1
   Instance Type: t3a.2xlarge - Count: 1
   Instance Type: c7g.xlarge - Count: 84
   Instance Type: c7i.xlarge - Count: 18
   Instance Type: c7a.large - Count: 1
   Volume Type: st1 - Total Size: 7000 GiB
   Volume Type: gp3 - Total Size: 5824 GiB
   Volume Type: io2 - Total Size: 20 GiB
   Volume Type: sc1 - Total Size: 81652 GiB

FedoraGroup: centos-stream-osci
 Region: ca-central-1
   Instance Type: m5d.large - Count: 5
   Instance Type: t2.micro - Count: 1
   Volume Type: gp3 - Total Size: 36 GiB

FedoraGroup: ci
 Region: us-east-1
   Instance Type: c5.2xlarge - Count: 3
   Volume Type: gp3 - Total Size: 1540 GiB
 Region: us-east-2
   Instance Type: m5a.4xlarge - Count: 1
   Instance Type: r5.large - Count: 4
   Instance Type: i3.2xlarge - Count: 5
   Instance Type: r5.xlarge - Count: 2
   Instance Type: c6g.large - Count: 2
   Instance Type: c6g.xlarge - Count: 1
   Instance Type: c6a.large - Count: 35
   Instance Type: c6a.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 11498 GiB
   Volume Type: gp2 - Total Size: 100 GiB

FedoraGroup: centos
 Region: ap-south-1
   Instance Type: t3.2xlarge - Count: 3
   Instance Type: t3.large - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 4570 GiB
 Region: eu-central-1
   Instance Type: t2.small - Count: 1
   Instance Type: t2.large - Count: 2
   Instance Type: r5b.8xlarge - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Instance Type: t3.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 13200 GiB
 Region: us-west-2
   Instance Type: m6i.2xlarge - Count: 2
   Instance Type: t3.small - Count: 1
   Instance Type: t3.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 13145 GiB
 Region: af-south-1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 4050 GiB
 Region: eu-west-3
   Instance Type: t2.large - Count: 2
   Instance Type: t2.2xlarge - Count: 1
   Instance Type: t3a.xlarge 

Heads up - AWS Snapshots cleanup

2023-11-28 Thread Miroslav Suchý

HEADS UP

On Friday I plan to delete all snapshots that were created before 2019 and do
not have the FedoraGroup tag.

On next Friday (2023-12-01) I plan to delete all snapshots that were created in 2021 and earlier and do not have the
FedoraGroup tag.


This will be done in all regions.

As mentioned in a different thread on this ML, we have tens of thousands of snapshots and it is impossible to evaluate each of
them separately.


If you are aware of something that should be preserved, let me know. Or better, 
tag it.
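For anyone who wants to check what would be affected, a dry-run sketch along these lines lists the candidates (owned by us, older than the cutoff, no FedoraGroup tag) without deleting anything; run it per region:

    import boto3
    from datetime import datetime, timezone

    ec2 = boto3.client("ec2")   # one region at a time
    cutoff = datetime(2019, 1, 1, tzinfo=timezone.utc)

    # List candidate snapshots only; nothing is deleted here.
    for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            tags = {t["Key"] for t in snap.get("Tags", [])}
            if snap["StartTime"] < cutoff and "FedoraGroup" not in tags:
                print(snap["SnapshotId"], snap["StartTime"].date())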

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Re: Heads up - AWS Snapshots cleanup

2023-11-28 Thread Miroslav Suchý

On 28. 11. 23 at 23:28, Sandro wrote:

On 28-11-2023 15:54, Miroslav Suchý wrote:

On Friday I plan to delete all snapshots that were created before 2019 and do
not have the FedoraGroup tag.

On next Friday (2023-12-01) I plan to delete all snapshots that were created in 2021 and earlier and do not have the
FedoraGroup tag.


So, that's two different clean-up activities on the same day? Or has an error crept in, since ("On Friday" == "next
Friday (2023-12-01)") && ("created before 2019" != "created in 2021 and earlier")?



Ah, I screwed up the dates. It should be:

On Friday (2023-12-01) I plan to delete all snapshots that were created before
2019 and do not have the FedoraGroup tag.

On next Friday (2023-12-08) I plan to delete all snapshots that were created in 2021 and earlier and do not have the
FedoraGroup tag.




--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Re: AWS Snapshots without FedoraGroup tag

2023-11-23 Thread Miroslav Suchý

On 09. 11. 23 at 20:39, Kevin Fenzi wrote:

Well, actually, we should probably check in on the thing that's cleaning
up the amis? and confirm that it is deleting the snapshots?

I think that is this:
roles/fedimg/templates/clean-amis.py
in ansible.

and it does delete the snapshot... so, perhaps indeed all these ones
with vol- are some mistake or some other amis?


I had time to investigate it a bit:

I deleted one of the ancient snapshots (from 2018) and AWS did not object. So it is not a base image for a current AMI
(otherwise AWS would refuse to delete it).


The snapshots have a Description like:

  Copied for DestinationAmi ami-052b0ac13b1043c97 from SourceAmi ami-0d9943288750067d3 for SourceSnapshot 
snap-0a92565926bd815be. Task created on 1,700,729,192,462.


I **think** this description is created when you copy a snapshot between regions.

I investigated one of today's such snapshots:

https://ap-south-1.console.aws.amazon.com/ec2/home?region=ap-south-1#SnapshotDetails:snapshotId=snap-0d77b2029ae9cdfd7

  (the description of this snapshot is the one cited above)

and the associated AMI exists. It is

https://ap-south-1.console.aws.amazon.com/ec2/home?region=ap-south-1#ImageDetails:imageId=ami-052b0ac13b1043c97

with name

Fedora-Cloud-Base-Rawhide-20231123.n.0.aarch64-hvm-ap-south-1-gp3-0

So these are really leftovers from creating nightly AMIs.

I checked the

  roles/fedimg/templates/clean-amis.py
and I think it does not work at all, for two reasons:
 1) We have active AMIs that have DeprecationTime set to 2022/08/11 and they are
not deleted. So this is likely the date when deleting AMIs stopped working. But
the snapshot deletion likely never worked.
 2) The code queries AMIs with Filters=[{"Name": "tag-key", "Values": ["LaunchPermissionRevoked"]}], but as far as I can see this is not
a tag but a different attribute. In any case the snapshots were not deleted. There is likely a bug I do not see right now.


I tried to delete one of the old snapshots that is still used as the base for an
active AMI (F27), and AWS refused with the message:

  Failed to delete snapshot.
    snap-0b271f1b25a3f9b47: The snapshot snap-0b271f1b25a3f9b47 is currently in 
use by ami-4ba98e24

Based on these findings I propose:

1) Delete **all** snapshots without the FedoraGroup tag older than - let's say - 2021. This way we can actually review whether
there are some snapshots other than leftovers from clean-amis that are worth preserving; right now I am unable to
review anything manually. If a snapshot is linked to a live AMI then AWS refuses to delete it and I will ignore such
errors. If there are no objections I will top-post this as a separate heads-up email.


2) Open a ticket for the owners of fedimg to fix the tooling to delete the
snapshots.

3) Open a ticket for the owners of fedimg to clean up (delete) AMIs with a
DeprecationTime earlier than today's date.


--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


AWS usage per group (November)

2023-12-02 Thread Miroslav Suchý

Here comes the November edition of resources running in AWS. It's a snapshot of
resources running today.

FedoraGroup: respins

 Region: us-east-1
   Volume Type: gp3 - Total Size: 500 GiB

FedoraGroup: ci
 Region: us-east-1
   Instance Type: c5.2xlarge - Count: 3
   Volume Type: gp3 - Total Size: 1540 GiB
 Region: us-east-2
   Instance Type: m5a.4xlarge - Count: 1
   Instance Type: r5.large - Count: 4
   Instance Type: i3.2xlarge - Count: 6
   Instance Type: r5.xlarge - Count: 2
   Instance Type: c5.2xlarge - Count: 1
   Instance Type: c6a.large - Count: 17
   Volume Type: gp3 - Total Size: 11654 GiB

FedoraGroup: centos-stream-build
 Region: us-east-2
   Instance Type: t2.micro - Count: 1

FedoraGroup: garbage-collector
 Region: ap-southeast-1
   Volume Type: standard - Total Size: 6 GiB

FedoraGroup: centos-stream-osci
 Region: ca-central-1
   Instance Type: m5d.large - Count: 5
   Instance Type: t2.micro - Count: 1
   Volume Type: gp3 - Total Size: 36 GiB

FedoraGroup: abrt
 Region: us-east-1
   Instance Type: t3a.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 10 GiB

FedoraGroup: infra
 Region: ap-south-1
   Instance Type: c5d.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 8 GiB
 Region: eu-central-1
   Instance Type: c5d.xlarge - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Instance Type: m5.xlarge - Count: 1
   Instance Type: m6gd.4xlarge - Count: 1
   Volume Type: gp3 - Total Size: 16223 GiB
 Region: us-west-1
   Instance Type: t3.medium - Count: 1
   Volume Type: gp3 - Total Size: 40 GiB
 Region: us-west-2
   Instance Type: m5.large - Count: 4
   Instance Type: m6g.large - Count: 1
   Instance Type: t3.large - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Instance Type: c6gd.xlarge - Count: 1
   Volume Type: standard - Total Size: 100 GiB
   Volume Type: gp3 - Total Size: 900 GiB
 Region: af-south-1
   Instance Type: c5d.xlarge - Count: 2
   Volume Type: gp3 - Total Size: 25 GiB
 Region: eu-west-2
   Instance Type: c5d.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 20 GiB
 Region: ap-northeast-2
   Instance Type: c5n.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 196 GiB
 Region: sa-east-1
   Instance Type: c5.2xlarge - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 200 GiB
 Region: ap-southeast-1
   Instance Type: c5d.xlarge - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 108 GiB
 Region: us-east-1
   Instance Type: t3.medium - Count: 1
   Instance Type: t3.small - Count: 1
   Instance Type: c5.xlarge - Count: 2
   Instance Type: t2.medium - Count: 2
   Instance Type: c5.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 330 GiB
 Region: us-east-2
   Instance Type: c5d.large - Count: 1
   Instance Type: t2.micro - Count: 1
   Instance Type: t3.medium - Count: 1
   Volume Type: gp3 - Total Size: 63 GiB

FedoraGroup: min
 Region: eu-central-1
   Instance Type: t3.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 1000 GiB

FedoraGroup: copr
 Region: us-east-1
   Instance Type: t3a.xlarge - Count: 6
   Instance Type: m5a.4xlarge - Count: 2
   Instance Type: c7i.xlarge - Count: 14
   Instance Type: c7g.xlarge - Count: 89
   Instance Type: t3a.medium - Count: 5
   Instance Type: t3a.small - Count: 1
   Instance Type: t2.medium - Count: 1
   Volume Type: st1 - Total Size: 6000 GiB
   Volume Type: gp3 - Total Size: 4172 GiB
   Volume Type: io2 - Total Size: 20 GiB
   Volume Type: sc1 - Total Size: 81652 GiB

FedoraGroup: centos
 Region: ap-south-1
   Instance Type: t3.2xlarge - Count: 3
   Instance Type: t3.large - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 4570 GiB
 Region: eu-central-1
   Instance Type: t2.small - Count: 1
   Instance Type: t2.large - Count: 2
   Instance Type: r5b.8xlarge - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Instance Type: t3.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 13200 GiB
 Region: us-west-2
   Instance Type: m6i.2xlarge - Count: 2
   Instance Type: t3.small - Count: 1
   Instance Type: t3.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 13145 GiB
 Region: af-south-1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 4050 GiB
 Region: eu-west-3
   Instance Type: t2.large - Count: 2
   Instance Type: t2.2xlarge - Count: 1
   Instance Type: t3a.xlarge - Count: 1
   Instance Type: t2.small - Count: 2
   Instance Type: t3.xlarge - Count: 1
   Instance Type: t3.large - Count: 1
   Volume Type: gp3 - Total Size: 2917 GiB
 Region: eu-west-2
   Instance Type: t2.small - Count: 1
   Instance Type: t2.large - Count: 1

Re: Heads up - AWS Snapshots cleanup

2023-12-02 Thread Miroslav Suchý

On 29. 11. 23 at 8:36, Miroslav Suchý wrote:

On Friday (2023-12-01) I plan to delete all snapshots that were created before
2019 and do not have the FedoraGroup tag.

On next Friday (2023-12-08) I plan to delete all snapshots that were created in 2021 and earlier and do not have the
FedoraGroup tag.


I started the cleanup, beginning with 2018 and older. The script is running right now. It found lots of snapshots
in eu-central-1, but surprisingly just a few snapshots in ap-south-1 - AWS refused to delete most snapshots there because they
were linked to an AMI, mostly Fedora-AtomicHost-29-*. We have plenty of them. See this screenshot:


  https://k00.fr/o0hxjyd0

I wonder - do we have a written retention policy for our images? Do we want to
keep the old ones? Public ones? Private ones?

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Re: Heads up - AWS Snapshots cleanup

2023-12-03 Thread Miroslav Suchý

On 29. 11. 23 at 8:36, Miroslav Suchý wrote:

On Friday (2023-12-01) I plan to delete all snapshots that were created before
2019 and do not have the FedoraGroup tag.


This phase has finished. Any snapshot older than 2019 that remained is there because AWS refused to delete it - likely because
it is associated with an AMI.


For the record this is the script I used:

https://github.com/xsuchy/fedora-infra-scripts/blob/main/delete-snapshots.py

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Re: Heads up - AWS Snapshots cleanup

2023-12-02 Thread Miroslav Suchý

On 02. 12. 23 at 22:45, Miroslav Suchý wrote:
I wonder - do we have a written retention policy for our images? Do we want to keep the old ones? Public ones? Private ones?


It seems that Fedora Atomic Host has been EOL since 2019-11-26:
https://projectatomic.io/blog/2019/11/fedora-atomic-host-nearing-eol/


The images for historical purposes are available at 
https://dl.fedoraproject.org/pub/alt/atomic/stable/

So it seems to me that we can safely delete all AMIs with the name
'Fedora-AtomicHost*'.

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


AWS usage per group (January)

2024-02-05 Thread Miroslav Suchý

Here comes the January edition of resources running in AWS. It's a snapshot of
resources running today.

FedoraGroup: garbage-collector
 Region: ap-southeast-1
   Volume Type: standard - Total Size: 6 GiB

FedoraGroup: min
 Region: eu-central-1
   Instance Type: t3.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 1000 GiB

FedoraGroup: respins
 Region: us-east-1
   Volume Type: gp3 - Total Size: 500 GiB

FedoraGroup: copr
 Region: us-east-1
   Instance Type: t3a.medium - Count: 4
   Instance Type: r5a.large - Count: 1
   Instance Type: t3a.small - Count: 1
   Instance Type: r7a.xlarge - Count: 1
   Instance Type: c7a.4xlarge - Count: 1
   Instance Type: m5a.4xlarge - Count: 1
   Instance Type: t3a.2xlarge - Count: 1
   Instance Type: c7i.xlarge - Count: 25
   Instance Type: c7a.large - Count: 1
   Instance Type: c7g.xlarge - Count: 36
   Volume Type: st1 - Total Size: 7000 GiB
   Volume Type: gp3 - Total Size: 7484 GiB
   Volume Type: io2 - Total Size: 20 GiB
   Volume Type: sc1 - Total Size: 81652 GiB

FedoraGroup: centos-stream-osci
 Region: ca-central-1
   Instance Type: m5d.large - Count: 5
   Instance Type: t2.micro - Count: 1
   Volume Type: gp3 - Total Size: 36 GiB

FedoraGroup: abrt
 Region: us-east-1
   Instance Type: t3a.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 10 GiB

FedoraGroup: centos
 Region: ap-south-1
   Instance Type: t3.2xlarge - Count: 3
   Instance Type: t3.large - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 4570 GiB
 Region: eu-central-1
   Instance Type: t2.small - Count: 1
   Instance Type: t2.large - Count: 2
   Instance Type: r5b.8xlarge - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Instance Type: t3.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 13200 GiB
 Region: us-west-2
   Instance Type: m6i.2xlarge - Count: 2
   Instance Type: t3.small - Count: 1
   Instance Type: t3.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 13145 GiB
 Region: af-south-1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 4050 GiB
 Region: eu-west-3
   Instance Type: t2.large - Count: 2
   Instance Type: t2.2xlarge - Count: 1
   Instance Type: t3a.xlarge - Count: 1
   Instance Type: t2.small - Count: 2
   Instance Type: t3.xlarge - Count: 1
   Instance Type: t3.large - Count: 1
   Volume Type: gp3 - Total Size: 2917 GiB
 Region: eu-west-2
   Instance Type: t2.small - Count: 1
   Instance Type: t2.large - Count: 1
   Instance Type: t3a.large - Count: 2
   Instance Type: r5a.8xlarge - Count: 1
   Instance Type: t3.large - Count: 2
   Instance Type: t3.xlarge - Count: 1
   Instance Type: t2.medium - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 26320 GiB
 Region: eu-west-1
   Instance Type: t2.medium - Count: 4
   Instance Type: t2.xlarge - Count: 3
   Instance Type: t2.large - Count: 1
   Instance Type: t2.small - Count: 2
   Instance Type: t3.large - Count: 1
   Instance Type: t3.medium - Count: 2
   Volume Type: gp3 - Total Size: 584 GiB
 Region: ap-northeast-1
   Instance Type: c6g.2xlarge - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 8070 GiB
 Region: sa-east-1
   Instance Type: c6g.4xlarge - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 4250 GiB
 Region: ap-southeast-1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 4050 GiB
 Region: ap-southeast-2
   Instance Type: m6i.2xlarge - Count: 2
   Volume Type: gp3 - Total Size: 8100 GiB
 Region: us-east-1
   Instance Type: t3.xlarge - Count: 2
   Volume Type: gp3 - Total Size: 26350 GiB
 Region: us-east-2
   Instance Type: t2.2xlarge - Count: 1
   Instance Type: m5a.2xlarge - Count: 1
   Instance Type: t3.xlarge - Count: 4
   Instance Type: t3a.large - Count: 1
   Instance Type: m6i.2xlarge - Count: 2
   Instance Type: t2.large - Count: 2
   Instance Type: t3.large - Count: 1
   Instance Type: m4.2xlarge - Count: 1
   Instance Type: t3.small - Count: 1
   Instance Type: t3.medium - Count: 1
   Volume Type: gp3 - Total Size: 46090 GiB

FedoraGroup: ci
 Region: us-east-1
   Instance Type: c5.2xlarge - Count: 3
   Volume Type: gp3 - Total Size: 1540 GiB
 Region: us-east-2
   Instance Type: m5a.4xlarge - Count: 1
   Instance Type: r5.large - Count: 4
   Instance Type: i3.2xlarge - Count: 5
   Instance Type: r5.xlarge - Count: 2
   Instance Type: c6g.large - Count: 5
   Instance Type: c6g.xlarge - Count: 1
   Instance Type: c6a.large - Count: 74
   Instance Type: c6a.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 11598 GiB
   Volume Type: 

AWS cleanup

2023-11-21 Thread Miroslav Suchý
During the autumn cleanup I deleted several abandoned instances and volumes. I made a snapshot of each such volume,
tagged it with FedoraGroup=garbage-collector, and then deleted the volume. So far no one has objected that they are missing
something. I plan to delete these snapshots tagged FedoraGroup=garbage-collector at the end of the month.


--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Re: Deleting old AMIs in AWS

2024-04-23 Thread Miroslav Suchý

On 23. 04. 24 at 3:31 PM, Dusty Mabe wrote:

If you don't mind give us until next week to finish up loose ends on this. We 
are already
adding the FedoraGroup=coreos tag to our snapshots/AMIs as we create them now, 
but we are
working on a script to go apply that tag to our existing images.

https://github.com/coreos/fedora-coreos-tracker/issues/1605#issuecomment-2052373124


Sure no problem.

And thank you for working on this.

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Untagged resources in AWS

2024-05-01 Thread Miroslav Suchý

This is without AMIs and snapshots, which would still produce a looong list.

If you are an owner, add the `FedoraGroup` tag to this resource. See
https://docs.fedoraproject.org/en-US/infra/sysadmin_guide/aws-access/#_role_and_user_policies


Region: us-east-1
Instances: (name, id, owner)
 * N/A (i-002b91fad28adbbd8, AutoScaling)
 * N/A (i-07b7d3193d4da2e41, AutoScaling)
 * N/A (i-08af34b6226aa62ac, AutoScaling)
 * N/A (i-0377c4527e994c5f1, AutoScaling)
Volumes - [id name (attached to instance, owner)]:
 * vol-052687f8fc3b09bc6 
kubernetes-dynamic-pvc-c0f128bf-d91e-4f5f-8c2c-ad169c7811a1 (N/A, 
1698424739242419174)
 * vol-0db7472b86bc75c0d 
testing-farm-staging-dynamic-pvc-3b8e8c59-79c2-45cb-b095-6f8976dd8ad9 (N/A, 
i-02deca5894e37e763)
 * vol-08d6d089c71689be0 
testing-farm-staging-dynamic-pvc-fe268915-6619-4b57-b643-9dedb91686cc (N/A, 
i-02deca5894e37e763)
 * vol-05f2d4ffcb1d6708a 
testing-farm-staging-dynamic-pvc-f2e08d70-785c-45ad-b387-a8b97bbcd996 (N/A, 
i-0bdbc9b1dd2b92bb7)
 * vol-036333a0a4cb38717 
testing-farm-staging-dynamic-pvc-7452b624-d6d0-4db9-8c05-b46662a6fd2e (N/A, 
i-0bdbc9b1dd2b92bb7)
 * vol-0a00f64279cae3e91 
testing-farm-staging-dynamic-pvc-13c2885c-67f1-4af3-93c6-b1cfa9312907 (N/A, 
i-0647c2571ea378711)
 * vol-0835dcd77e77790ad 
testing-farm-staging-dynamic-pvc-23419a97-cee0-4191-90b5-d10eb48ef527 (N/A, 
i-0647c2571ea378711)
 * vol-08f137aab87ab99ba 
testing-farm-staging-dynamic-pvc-54723197-b84c-4396-ba77-f1b1e29841f1 (N/A, 
i-00c59b26ed8cd6e33)
 * vol-07baa567c53a92ac9 
testing-farm-staging-dynamic-pvc-bbb2827a-c0f0-4d8f-bf1c-fe380f888916 (N/A, 
i-00c59b26ed8cd6e33)
 * vol-0d563a307b69275a4 
testing-farm-staging-dynamic-pvc-cfdbb6e1-f8f0-4fc9-8377-17e074d9f652 (N/A, 
i-0db3ea8ad4268b8f7)
 * vol-00ed882e7434da7c2 
testing-farm-staging-dynamic-pvc-d80f8118-a0fd-4c5a-ab10-69021532f3cc (N/A, 
i-0db3ea8ad4268b8f7)
 * vol-07f98d6f028c7aa27 N/A (, AutoScaling)
 * vol-04872a5e8007defdf testing-farm-production-02-dynamic-pvc-1f4758fc-88a7-4200-80d2-a38330a531ee (N/A, 
i-042b55378a5d2d2bc)
 * vol-034a0fab73efebb14 testing-farm-production-02-dynamic-pvc-fc894aa6-6b15-435c-8bbd-1a63c1c6fd3f (N/A, 
i-07ae8bf089c014f74)
 * vol-072635ac62b1d563b testing-farm-production-02-dynamic-pvc-aa66fa4f-d57b-433c-9ef4-f74ac67e13a6 (N/A, 
i-07ae8bf089c014f74)

 * vol-08837b9e8065f3ba9 N/A (, AutoScaling)
 * vol-0b54269f866bdb9ac 
testing-farm-production-dynamic-pvc-e23318b2-820f-4225-a6bf-93f9235a2c44 (, 
i-002b91fad28adbbd8)
 * vol-0e1c648e79153a998 
testing-farm-staging-dynamic-pvc-451e7361-7c58-41d1-9dd5-8b75b4cbd150 (N/A, 
i-0a67e06b991868827)
 * vol-03a736af9a4244a86 
testing-farm-staging-dynamic-pvc-cb13f8bc-fa89-47e0-be38-9461d1749792 (N/A, 
i-0a67e06b991868827)
 * vol-08b5b1bbe77563f68 N/A (qa-openqa-webserver, dbrouwer)
 * vol-0ec1ee7bcca7efd45 N/A (qa-openqa-worker-1, dbrouwer)
 * vol-018523624ce1c4a01 N/A (qa-openqa-webserver-staging, dbrouwer)
 * vol-0c3f00a4952ffb6b0 N/A (qa-openqa-worker-staging, dbrouwer)
 * vol-01a544583f710548c N/A (qa-openqa-worker-2, dbrouwer)
 * vol-0ad288770f8f68790 N/A (qa-openqa-worker-3, dbrouwer)
 * vol-059204a5ea591cd70 N/A (qa-openqa-worker-4, dbrouwer)
 * vol-023a6fc87412597a5 N/A (qa-openqa-worker-5, dbrouwer)
 * vol-0fa66045aa3c995b5 N/A (pyai.fedorainfracloud.org, bstinson)
 * vol-03eeca1a8d9e066b8 N/A (pyai.fedorainfracloud.org, bstinson)
 * vol-0cb9992d3fc7af2bb N/A (temp-pyai-pytorch-builder, bstinson)
 * vol-0683ebc07a920dca7 
testing-farm-staging-dynamic-pvc-84c19b77-081f-4ab1-89d9-de8f0a19209b (N/A, 
i-088625e35e83fd5e2)
 * vol-0c6f89d67ba46ead8 
testing-farm-staging-dynamic-pvc-0b94b80d-e4ed-4dae-8d6c-48fa9c87ebe0 (N/A, 
i-088625e35e83fd5e2)
 * vol-03d2a579a524b926e 
testing-farm-staging-dynamic-pvc-cc116c00-5508-493d-9b65-f99ad1ae0a2d (N/A, 
i-088625e35e83fd5e2)
 * vol-043ff9903a82867a1 
testing-farm-staging-dynamic-pvc-a805cb91-d435-4087-a022-b6c163953b77 (N/A, 
i-088625e35e83fd5e2)
 * vol-0421a320e41736a22 
testing-farm-staging-dynamic-pvc-5c0ff3f8-72a5-4ace-b67a-840cae72d19e (N/A, 
i-088625e35e83fd5e2)
 * vol-0fcfffcef3379f2fb 
testing-farm-staging-dynamic-pvc-01141aee-de71-4019-86c9-05614d1b9910 (N/A, 
i-088625e35e83fd5e2)
 * vol-0bdfc560e445a1ca0 
testing-farm-staging-dynamic-pvc-f32da186-4d88-45b9-9241-965ec3c667ae (N/A, 
i-088625e35e83fd5e2)
 * vol-0cfb615f6df385ec9 
testing-farm-staging-dynamic-pvc-2690e9c5-a1d6-4fce-9a4f-e15f107d596a (N/A, 
i-088625e35e83fd5e2)
 * vol-08e2039df396915ca 
testing-farm-staging-dynamic-pvc-45f52ccd-7db3-4409-8b60-a797dcb803ca (N/A, 
i-088625e35e83fd5e2)
 * vol-0a124a343d41f4897 
testing-farm-staging-dynamic-pvc-ea887ba0-7fc4-4729-b175-9d23f3f67e87 (N/A, 
i-088625e35e83fd5e2)
 * vol-01cbe75dda3583e65 
testing-farm-staging-dynamic-pvc-8bf6ff0a-34ff-492d-97c1-1081105d12b8 (N/A, 
i-088625e35e83fd5e2)
 * vol-06c4a3139d48e96f6 
testing-farm-staging-dynamic-pvc-54955031-3560-434e-9e33-9b9d55b66de1 (N/A, 
i-088625e35e83fd5e2)
 * vol-0944aae869650d726 
testing-farm-staging-dynamic-pvc-0cbf4046-e507-4dda-ae2a-a46439da18ab (N/A, 

AWS usage per group (April)

2024-05-01 Thread Miroslav Suchý

Here comes the April edition of resources running in AWS. It's a snapshot of 
the resources running today.

FedoraGroup: Not tagged
 Region: ap-south-1
   Service Name:
   # of AMIs: 1302
 Region: eu-south-1
   Service Name:
   # of AMIs: 102
 Region: il-central-1
   Service Name:
   # of AMIs: 85
 Region: ca-central-1
   Service Name:
   # of AMIs: 1316
 Region: eu-central-1
   Service Name:
   # of AMIs: 1338
 Region: us-west-1
   Service Name:
   # of AMIs: 1344
 Region: us-west-2
   Service Name:
   # of AMIs: 1339
 Region: af-south-1
   Service Name:
   # of AMIs: 104
 Region: eu-north-1
   Service Name:
   # of AMIs: 103
 Region: eu-west-3
   Service Name:
   # of AMIs: 104
 Region: eu-west-2
   Service Name:
   # of AMIs: 1354
 Region: eu-west-1
   Service Name:
   # of AMIs: 1308
 Region: ap-northeast-2
   Service Name:
   # of AMIs: 1303
 Region: me-south-1
   Service Name:
   # of AMIs: 102
 Region: ap-northeast-1
   Service Name:
   # of AMIs: 1341
 Region: sa-east-1
   Service Name:
   # of AMIs: 1320
 Region: ap-east-1
   Service Name:
   # of AMIs: 103
 Region: ap-southeast-1
   Service Name:
   # of AMIs: 1326
 Region: ap-southeast-2
   Service Name:
   # of AMIs: 1317
 Region: us-east-1
   Service Name:
   Instance Type: c6a.2xlarge - Count: 4
   Volume Type: gp3 - Total Size: 500 GiB
   Volume Type: gp2 - Total Size: 500 GiB
   # of AMIs: 1584
 Region: us-east-2
   Service Name:
   Instance Type: c5.2xlarge - Count: 2
   Instance Type: p3.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 200 GiB
   Volume Type: gp2 - Total Size: 500 GiB
   # of AMIs: 840
   Service Name: Artemis

   # of AMIs: 166


FedoraGroup: pyai
 Region: us-east-1
   Service Name:
   Instance Type: t2.large - Count: 1
   Instance Type: i3.16xlarge - Count: 1

FedoraGroup: ga-archives
 Region: ap-south-1
   Service Name:
   # of AMIs: 47
 Region: eu-south-1
   Service Name:
   # of AMIs: 6
 Region: il-central-1
   Service Name:
   # of AMIs: 4
 Region: ca-central-1
   Service Name:
   # of AMIs: 47
 Region: eu-central-1
   Service Name:
   # of AMIs: 69
 Region: us-west-1
   Service Name:
   # of AMIs: 69
 Region: us-west-2
   Service Name:
   # of AMIs: 69
 Region: af-south-1
   Service Name:
   # of AMIs: 8
 Region: eu-north-1
   Service Name:
   # of AMIs: 6
 Region: eu-west-3
   Service Name:
   # of AMIs: 12
 Region: eu-west-2
   Service Name:
   # of AMIs: 47
 Region: eu-west-1
   Service Name:
   # of AMIs: 69
 Region: ap-northeast-2
   Service Name:
   # of AMIs: 47
 Region: me-south-1
   Service Name:
   # of AMIs: 6
 Region: ap-northeast-1
   Service Name:
   # of AMIs: 69
 Region: sa-east-1
   Service Name:
   # of AMIs: 69
 Region: ap-east-1
   Service Name:
   # of AMIs: 6
 Region: ap-southeast-1
   Service Name:
   # of AMIs: 69
 Region: ap-southeast-2
   Service Name:
   # of AMIs: 69
 Region: us-east-1
   Service Name:
   # of AMIs: 69
 Region: us-east-2
   Service Name:
   # of AMIs: 47

FedoraGroup: infra
 Region: ap-south-1
   Service Name:
   Instance Type: c5d.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 8 GiB
 Region: eu-central-1
   Service Name:
   Instance Type: c5d.xlarge - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Instance Type: m5.xlarge - Count: 1
   Instance Type: m6gd.4xlarge - Count: 1
   Volume Type: gp3 - Total Size: 16000 GiB
 Region: us-west-1
   Service Name:
   Instance Type: t3.medium - Count: 1
   Instance Type: t2.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 80 GiB
 Region: us-west-2
   Service Name:
   Instance Type: m5.large - Count: 4
   Instance Type: m6g.large - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Instance Type: c6gd.xlarge - Count: 1
   Volume Type: standard - Total Size: 100 GiB
   Volume Type: gp3 - Total Size: 100 GiB
 Region: af-south-1
   Service Name:
   Instance Type: c5d.xlarge - Count: 2
   Volume Type: gp3 - Total Size: 10 GiB
 Region: eu-west-2
   Service Name:
   Instance Type: c5d.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 20 GiB
 Region: eu-west-1
   Service Name:
   Instance Type: m4.10xlarge - Count: 1
   Volume Type: gp3 - Total Size: 20 GiB
 Region: ap-northeast-2
   Service Name:
   Instance Type: c5n.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 190 GiB
 Region: sa-east-1
   Service Name:
   Instance Type: c5.2xlarge - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 100 GiB
 Region: ap-southeast-1
   Service Name:
   Instance Type: c5n.2xlarge - Count: 2
   Volume Type: gp3 - Total Size: 50 GiB
 Region: us-east-1
   Service Name:
   Instance Type: t3.medium - Count: 1
   Instance Type: t3.small - Count: 1
   Instance Type: c5.xlarge - 

Re: AWS usage per group (April)

2024-05-02 Thread Miroslav Suchý

On 02. 05. 24 at 7:25 AM, Miroslav Suchý wrote:

Here comes the April edition of resources running in AWS. It's a snapshot of 
the resources running today.


I had a bug in the script and the volume sizes were calculated incorrectly. Here is 
the fixed version:

FedoraGroup: abrt
 Region: us-east-1
   Service Name:
   Instance Type: t3a.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 10 GiB

FedoraGroup: centos
 Region: ap-south-1
   Service Name:
   Instance Type: t3.2xlarge - Count: 3
   Instance Type: t3.large - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 4570 GiB
 Region: eu-central-1
   Service Name:
   Instance Type: t2.large - Count: 1
   Instance Type: r5b.8xlarge - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Instance Type: t3.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 13170 GiB
 Region: us-west-2
   Service Name:
   Instance Type: m6i.2xlarge - Count: 2
   Instance Type: t3.small - Count: 1
   Instance Type: t3.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 13145 GiB
 Region: af-south-1
   Service Name:
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 4050 GiB
 Region: eu-west-3
   Service Name:
   Instance Type: t2.2xlarge - Count: 1
   Instance Type: t3a.xlarge - Count: 1
   Instance Type: t2.large - Count: 1
   Instance Type: t2.small - Count: 2
   Instance Type: t3.xlarge - Count: 1
   Instance Type: t3.large - Count: 1
   Volume Type: gp3 - Total Size: 2917 GiB
 Region: eu-west-2
   Service Name:
   Instance Type: t2.large - Count: 1
   Instance Type: t3a.large - Count: 2
   Instance Type: r5a.8xlarge - Count: 1
   Instance Type: t3.large - Count: 2
   Instance Type: t3.xlarge - Count: 1
   Instance Type: t2.medium - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 26300 GiB
 Region: eu-west-1
   Service Name:
   Instance Type: t2.medium - Count: 4
   Instance Type: t2.xlarge - Count: 3
   Instance Type: t2.large - Count: 1
   Instance Type: t2.small - Count: 2
   Instance Type: t3.large - Count: 1
   Instance Type: t3.medium - Count: 2
   Volume Type: gp3 - Total Size: 584 GiB
 Region: ap-northeast-1
   Service Name:
   Instance Type: c6g.2xlarge - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 8070 GiB
 Region: sa-east-1
   Service Name:
   Instance Type: c6g.4xlarge - Count: 1
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 4250 GiB
 Region: ap-southeast-1
   Service Name:
   Instance Type: m6i.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 4050 GiB
 Region: ap-southeast-2
   Service Name:
   Instance Type: m6i.2xlarge - Count: 2
   Volume Type: gp3 - Total Size: 8100 GiB
 Region: us-east-1
   Service Name:
   Instance Type: t3.xlarge - Count: 2
   Instance Type: t2.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 26350 GiB
 Region: us-east-2
   Service Name:
   Instance Type: t2.2xlarge - Count: 1
   Instance Type: m5a.2xlarge - Count: 1
   Instance Type: t3.xlarge - Count: 4
   Instance Type: t3a.large - Count: 1
   Instance Type: m6i.2xlarge - Count: 2
   Instance Type: t2.large - Count: 2
   Instance Type: t3.large - Count: 1
   Instance Type: m4.2xlarge - Count: 1
   Instance Type: t3.small - Count: 1
   Instance Type: t3.medium - Count: 1
   Volume Type: gp3 - Total Size: 50090 GiB

FedoraGroup: Not tagged
 Region: ap-south-1
   Service Name:
   # of AMIs: 1306
 Region: eu-south-1
   Service Name:
   # of AMIs: 102
 Region: il-central-1
   Service Name:
   # of AMIs: 85
 Region: ca-central-1
   Service Name:
   # of AMIs: 1320
 Region: eu-central-1
   Service Name:
   # of AMIs: 1342
 Region: us-west-1
   Service Name:
   # of AMIs: 1348
 Region: us-west-2
   Service Name:
   # of AMIs: 1343
 Region: af-south-1
   Service Name:
   # of AMIs: 104
 Region: eu-north-1
   Service Name:
   # of AMIs: 103
 Region: eu-west-3
   Service Name:
   # of AMIs: 104
 Region: eu-west-2
   Service Name:
   # of AMIs: 1358
 Region: eu-west-1
   Service Name:
   # of AMIs: 1312
 Region: ap-northeast-2
   Service Name:
   # of AMIs: 1307
 Region: me-south-1
   Service Name:
   # of AMIs: 102
 Region: ap-northeast-1
   Service Name:
   # of AMIs: 1345
 Region: sa-east-1
   Service Name:
   # of AMIs: 1324
 Region: ap-east-1
   Service Name:
   # of AMIs: 103
 Region: ap-southeast-1
   Service Name:
   # of AMIs: 1330
 Region: ap-southeast-2
   Service Name:
   # of AMIs: 1321
 Region: us-east-1
   Service Name:
   Instance Type: c6a.2xlarge - Count: 4
   Volume Type: gp3 - Total Size: 2882 GiB
   # of AMIs: 1588
 Region: us-east-2
   Service Name:
   Instance Type: c5.2xlarge

Deleting old AMIs in AWS

2024-03-14 Thread Miroslav Suchý

FYI, I plan to continue with the AWS cleanup on Friday.

I waited till the freeze was over - just to be safe. Now I want to delete the old AMIs, likely in several waves, going 
from the oldest to ~2021.


In this step I plan to keep the associated snapshots. So if I break something 
we can still restore the AMI.
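
The deregistration itself is just one API call per AMI. Roughly, it looks like this simplified boto3 sketch (not the 
actual script; the cutoff date and region are illustrative):

    import boto3
    from datetime import datetime, timezone

    CUTOFF = datetime(2021, 1, 1, tzinfo=timezone.utc)   # illustrative cutoff
    ec2 = boto3.client("ec2", region_name="us-east-1")   # repeated for every region in the real run

    for image in ec2.describe_images(Owners=["self"])["Images"]:
        tags = {t["Key"]: t["Value"] for t in image.get("Tags", [])}
        created = datetime.fromisoformat(image["CreationDate"].replace("Z", "+00:00"))
        if "FedoraGroup" in tags or created >= CUTOFF:
            continue
        # deregister_image() only removes the AMI record; the EBS snapshots
        # listed in its BlockDeviceMappings stay in place, so the AMI can be
        # re-registered from them if something breaks
        print("deregistering", image["ImageId"], image.get("Name", ""))
        ec2.deregister_image(ImageId=image["ImageId"])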


BTW - a quick summary of where we are with the cleanup stuff:

* all VMs and volumes have the FedoraGroup tag

* all gp2 volumes are migrated to gp3

* all AMIs with the name 'Fedora-AtomicHost-*' are deleted, including the associated 
snapshots

* all Fedora GA AMIs and snapshots are tagged with FedoraGroup.

* all old (2021-) snapshots with no associated AMIs are deleted.

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


AWS usage per group (March)

2024-04-01 Thread Miroslav Suchý

Here comes the March edition of resources running in AWS. It's a snapshot of 
the resources running today.

Per request of Miro Vadkerti, I grouped it by (FedoraGroup, region, ServiceName). I will try to make it more compact next 
time, but I am giving up on that for now as it has already cost me half of the night.
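
(For anyone curious how such a report can be put together: a minimal boto3 sketch of the grouping, instances only, 
with the tag names as they appear in this report. This is not the actual report script, just an illustration.)

    import boto3
    from collections import Counter, defaultdict

    regions = [r["RegionName"] for r in
               boto3.client("ec2", region_name="us-east-1").describe_regions()["Regions"]]
    report = defaultdict(Counter)   # (FedoraGroup, region, ServiceName) -> per-instance-type counts

    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                    key = (tags.get("FedoraGroup", "Not tagged"), region, tags.get("ServiceName", ""))
                    report[key][instance["InstanceType"]] += 1

    for (group, region, service), counts in sorted(report.items()):
        print(group, region, service, dict(counts))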



FedoraGroup: centos-stream-build
  Region: us-east-2
    Service Name:
    Instance Type: t2.micro - Count: 1

FedoraGroup: garbage-collector
  Region: ap-south-1
    Service Name:
  Region: ca-central-1
    Service Name:
  Region: us-west-1
    Service Name:
  Region: us-west-2
    Service Name:
  Region: eu-west-1
    Service Name:
  Region: ap-northeast-1
    Service Name:
  Region: sa-east-1
    Service Name:
  Region: ap-southeast-1
    Service Name:
    Volume Type: standard - Total Size: 6 GiB
  Region: ap-southeast-2
    Service Name:
  Region: us-east-1
    Service Name:
  Region: us-east-2
    Service Name:

FedoraGroup: copr
  Region: us-east-1
    Service Name:
    Instance Type: t3a.medium - Count: 3
    Instance Type: r5a.large - Count: 1
    Instance Type: t3a.small - Count: 1
    Instance Type: r7a.xlarge - Count: 1
    Instance Type: c7a.4xlarge - Count: 1
    Instance Type: m5a.4xlarge - Count: 1
    Instance Type: t3a.2xlarge - Count: 1
    Instance Type: c7i.xlarge - Count: 100
    Instance Type: c7g.xlarge - Count: 72
    Instance Type: c7a.large - Count: 1
    Volume Type: st1 - Total Size: 500 GiB
    Volume Type: gp3 - Total Size: 6 GiB
    Volume Type: io2 - Total Size: 20 GiB
    Volume Type: sc1 - Total Size: 16000 GiB
    Volume Type: gp2 - Total Size: 20 GiB
    # of AMIs: 66
  Region: us-east-2
    Service Name:

FedoraGroup: abrt
  Region: us-east-1
    Service Name:
    Instance Type: t3a.2xlarge - Count: 1
    Volume Type: gp3 - Total Size: 10 GiB

FedoraGroup: ga-archives
  Region: ap-south-1
    Service Name:
    # of AMIs: 41
  Region: ca-central-1
    Service Name:
    # of AMIs: 41
  Region: eu-central-1
    Service Name:
    # of AMIs: 63
  Region: us-west-1
    Service Name:
    # of AMIs: 63
  Region: us-west-2
    Service Name:
    # of AMIs: 63
  Region: af-south-1
    Service Name:
    # of AMIs: 2
  Region: eu-west-3
    Service Name:
    # of AMIs: 6
  Region: eu-west-2
    Service Name:
    # of AMIs: 41
  Region: eu-west-1
    Service Name:
    # of AMIs: 63
  Region: ap-northeast-2
    Service Name:
    # of AMIs: 41
  Region: ap-northeast-1
    Service Name:
    # of AMIs: 63
  Region: sa-east-1
    Service Name:
    # of AMIs: 63
  Region: ap-southeast-1
    Service Name:
    # of AMIs: 63
  Region: ap-southeast-2
    Service Name:
    # of AMIs: 63
  Region: us-east-1
    Service Name:
    # of AMIs: 63
  Region: us-east-2
    Service Name:
    # of AMIs: 41

FedoraGroup: centos-stream-osci
  Region: ca-central-1
    Service Name:
    Instance Type: m5d.large - Count: 5
    Instance Type: t2.micro - Count: 1
    Volume Type: gp3 - Total Size: 6 GiB

FedoraGroup: respins
  Region: us-east-1
    Service Name:
    Volume Type: gp3 - Total Size: 250 GiB

FedoraGroup: min
  Region: eu-central-1
    Service Name:
    Instance Type: t3.2xlarge - Count: 1
    Volume Type: gp3 - Total Size: 500 GiB

FedoraGroup: qa
  Region: us-east-1
    Service Name:
    Instance Type: c5n.metal - Count: 11
    Instance Type: c6a.8xlarge - Count: 2
    Volume Type: gp3 - Total Size: 100 GiB

FedoraGroup: infra
  Region: ap-south-1
    Service Name:
    Instance Type: c5d.xlarge - Count: 1
    Volume Type: gp3 - Total Size: 8 GiB
  Region: eu-central-1
    Service Name:
    Instance Type: c5d.xlarge - Count: 1
    Instance Type: c5n.2xlarge - Count: 1
    Instance Type: m5.xlarge - Count: 1
    Instance Type: m6gd.4xlarge - Count: 1
    Volume Type: gp3 - Total Size: 16000 GiB
  Region: us-west-1
    Service Name:
    Instance Type: t3.medium - Count: 1
    Instance Type: t2.xlarge - Count: 1
    Volume Type: gp3 - Total Size: 40 GiB
  Region: us-west-2
    Service Name:
    Instance Type: m5.large - Count: 4
    Instance Type: m6g.large - Count: 1
    Instance Type: c5n.2xlarge - Count: 1
    Instance Type: c6gd.xlarge - Count: 1
    Volume Type: standard - Total Size: 100 GiB
    Volume Type: gp3 - Total Size: 100 GiB
  Region: af-south-1
    Service Name:
    Instance Type: c5d.xlarge - Count: 2
    Volume Type: gp3 - Total Size: 10 GiB
  Region: eu-west-2
    Service Name:
    Instance Type: c5d.xlarge - Count: 1
    Volume Type: gp3 - Total Size: 20 GiB
  Region: eu-west-1
    Service Name:
    Instance Type: m4.10xlarge - Count: 1
    Volume Type: gp3 - Total Size: 20 GiB
  Region: ap-northeast-2
    Service Name:
    Instance Type: c5n.2xlarge - Count: 1
    Volume Type: gp3 - Total Size: 190 GiB
  Region: 

Re: Deleting old AMIs in AWS

2024-04-02 Thread Miroslav Suchý

On 02. 04. 24 at 7:45 PM, Kevin Fenzi wrote:

On Tue, Apr 02, 2024 at 07:13:56AM +0200, Miroslav Suchý wrote:

On 14. 03. 24 at 9:58 AM, Miroslav Suchý wrote:

FYI, I plan to continue with the AWS cleanup on Friday.

I waited till the freeze was over - just to be safe. Now I want to delete
the old AMIs, likely in several waves, going from the oldest to ~2021.

I deleted all AMIs that do not have the FedoraGroup tag and that were older than 
2019-01-01.

For the record, the list of deleted AMIs is in the attachment. The script
that I used is
https://github.com/xsuchy/fedora-infra-scripts/blob/main/delete-old-amis.py

The script deregistered 36996 AMIs. The associated snapshots still exist.

Hurray!

Thanks again for doing this.


You are welcome. But I have to say I am scared.
I just asked myself: under which account are the CentOS AMIs stored?

Our account is the answer!

So, I have just tagged all AMIs from

https://www.centos.org/download/aws-images/

with FedoraGroup=ga-archives
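
(The tagging itself is a single create_tags call per batch of AMIs - a boto3 sketch, where the AMI ID is a 
placeholder and the real IDs come from the centos.org list above:)

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # repeated for every region
    ami_ids = ["ami-0123456789abcdef0"]                   # placeholder: IDs taken from the centos.org list

    # create_tags accepts AMIs, snapshots, volumes and instances alike
    ec2.create_tags(
        Resources=ami_ids,
        Tags=[{"Key": "FedoraGroup", "Value": "ga-archives"}],
    )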

Any idea if I missed something else before I start deleting the more recent 
ones?


--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Untagged resources in AWS

2024-04-01 Thread Miroslav Suchý

This is without AMIs and snapshots, which would still produce a looong list.
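
(The listing below comes from a scan for resources that have no FedoraGroup tag. A minimal boto3 sketch of such a 
scan, volumes only, region as an example - not the actual reporting script:)

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-1")   # repeated for every region

    for page in ec2.get_paginator("describe_volumes").paginate():
        for volume in page["Volumes"]:
            tags = {t["Key"]: t["Value"] for t in volume.get("Tags", [])}
            if "FedoraGroup" not in tags:
                print(" *", volume["VolumeId"], tags.get("Name", "N/A"))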

Region: us-west-1
Volumes - [id name (attached to instance, owner)]:
 * vol-0d7702fbe7ab94c6f N/A (famna.fedorainfracloud.org, N/A)

Region: us-west-2
Instances: (name, id, owner)
 * openscanhub-test (i-0c32e3d4eff4bf1a4, N/A)
Volumes - [id name (attached to instance, owner)]:
 * vol-0e9ad438b3cf1e5b9 N/A (openscanhub-test, N/A)

Region: us-east-1
Instances: (name, id, owner)
 * N/A (i-002b91fad28adbbd8, AutoScaling)
 * N/A (i-02ee729f353a2c926, AutoScaling)
 * N/A (i-0377c4527e994c5f1, AutoScaling)
Volumes - [id name (attached to instance, owner)]:
 * vol-052687f8fc3b09bc6 
kubernetes-dynamic-pvc-c0f128bf-d91e-4f5f-8c2c-ad169c7811a1 (N/A, 
1698424739242419174)
 * vol-0db7472b86bc75c0d 
testing-farm-staging-dynamic-pvc-3b8e8c59-79c2-45cb-b095-6f8976dd8ad9 (N/A, 
i-02deca5894e37e763)
 * vol-08d6d089c71689be0 
testing-farm-staging-dynamic-pvc-fe268915-6619-4b57-b643-9dedb91686cc (N/A, 
i-02deca5894e37e763)
 * vol-05f2d4ffcb1d6708a 
testing-farm-staging-dynamic-pvc-f2e08d70-785c-45ad-b387-a8b97bbcd996 (N/A, 
i-0bdbc9b1dd2b92bb7)
 * vol-036333a0a4cb38717 
testing-farm-staging-dynamic-pvc-7452b624-d6d0-4db9-8c05-b46662a6fd2e (N/A, 
i-0bdbc9b1dd2b92bb7)
 * vol-0a00f64279cae3e91 
testing-farm-staging-dynamic-pvc-13c2885c-67f1-4af3-93c6-b1cfa9312907 (N/A, 
i-0647c2571ea378711)
 * vol-0835dcd77e77790ad 
testing-farm-staging-dynamic-pvc-23419a97-cee0-4191-90b5-d10eb48ef527 (N/A, 
i-0647c2571ea378711)
 * vol-08f137aab87ab99ba 
testing-farm-staging-dynamic-pvc-54723197-b84c-4396-ba77-f1b1e29841f1 (N/A, 
i-00c59b26ed8cd6e33)
 * vol-07baa567c53a92ac9 
testing-farm-staging-dynamic-pvc-bbb2827a-c0f0-4d8f-bf1c-fe380f888916 (N/A, 
i-00c59b26ed8cd6e33)
 * vol-0d563a307b69275a4 
testing-farm-staging-dynamic-pvc-cfdbb6e1-f8f0-4fc9-8377-17e074d9f652 (N/A, 
i-0db3ea8ad4268b8f7)
 * vol-00ed882e7434da7c2 
testing-farm-staging-dynamic-pvc-d80f8118-a0fd-4c5a-ab10-69021532f3cc (N/A, 
i-0db3ea8ad4268b8f7)
 * vol-04872a5e8007defdf testing-farm-production-02-dynamic-pvc-1f4758fc-88a7-4200-80d2-a38330a531ee (N/A, 
i-042b55378a5d2d2bc)
 * vol-034a0fab73efebb14 testing-farm-production-02-dynamic-pvc-fc894aa6-6b15-435c-8bbd-1a63c1c6fd3f (N/A, 
i-07ae8bf089c014f74)
 * vol-072635ac62b1d563b testing-farm-production-02-dynamic-pvc-aa66fa4f-d57b-433c-9ef4-f74ac67e13a6 (N/A, 
i-07ae8bf089c014f74)

 * vol-08837b9e8065f3ba9 N/A (, AutoScaling)
 * vol-0b54269f866bdb9ac 
testing-farm-production-dynamic-pvc-e23318b2-820f-4225-a6bf-93f9235a2c44 (, 
i-002b91fad28adbbd8)
 * vol-0e1c648e79153a998 
testing-farm-staging-dynamic-pvc-451e7361-7c58-41d1-9dd5-8b75b4cbd150 (N/A, 
i-0a67e06b991868827)
 * vol-03a736af9a4244a86 
testing-farm-staging-dynamic-pvc-cb13f8bc-fa89-47e0-be38-9461d1749792 (N/A, 
i-0a67e06b991868827)
 * vol-0683ebc07a920dca7 
testing-farm-staging-dynamic-pvc-84c19b77-081f-4ab1-89d9-de8f0a19209b (N/A, 
i-088625e35e83fd5e2)
 * vol-0c6f89d67ba46ead8 
testing-farm-staging-dynamic-pvc-0b94b80d-e4ed-4dae-8d6c-48fa9c87ebe0 (N/A, 
i-088625e35e83fd5e2)
 * vol-03d2a579a524b926e 
testing-farm-staging-dynamic-pvc-cc116c00-5508-493d-9b65-f99ad1ae0a2d (N/A, 
i-088625e35e83fd5e2)
 * vol-043ff9903a82867a1 
testing-farm-staging-dynamic-pvc-a805cb91-d435-4087-a022-b6c163953b77 (N/A, 
i-088625e35e83fd5e2)
 * vol-0421a320e41736a22 
testing-farm-staging-dynamic-pvc-5c0ff3f8-72a5-4ace-b67a-840cae72d19e (N/A, 
i-088625e35e83fd5e2)
 * vol-0fcfffcef3379f2fb 
testing-farm-staging-dynamic-pvc-01141aee-de71-4019-86c9-05614d1b9910 (N/A, 
i-088625e35e83fd5e2)
 * vol-0bdfc560e445a1ca0 
testing-farm-staging-dynamic-pvc-f32da186-4d88-45b9-9241-965ec3c667ae (N/A, 
i-088625e35e83fd5e2)
 * vol-0cfb615f6df385ec9 
testing-farm-staging-dynamic-pvc-2690e9c5-a1d6-4fce-9a4f-e15f107d596a (N/A, 
i-088625e35e83fd5e2)
 * vol-08e2039df396915ca 
testing-farm-staging-dynamic-pvc-45f52ccd-7db3-4409-8b60-a797dcb803ca (N/A, 
i-088625e35e83fd5e2)
 * vol-0a124a343d41f4897 
testing-farm-staging-dynamic-pvc-ea887ba0-7fc4-4729-b175-9d23f3f67e87 (N/A, 
i-088625e35e83fd5e2)
 * vol-01cbe75dda3583e65 
testing-farm-staging-dynamic-pvc-8bf6ff0a-34ff-492d-97c1-1081105d12b8 (N/A, 
i-088625e35e83fd5e2)
 * vol-06c4a3139d48e96f6 
testing-farm-staging-dynamic-pvc-54955031-3560-434e-9e33-9b9d55b66de1 (N/A, 
i-088625e35e83fd5e2)
 * vol-0944aae869650d726 
testing-farm-staging-dynamic-pvc-0cbf4046-e507-4dda-ae2a-a46439da18ab (N/A, 
i-088625e35e83fd5e2)
 * vol-00b3a1696f98cdeae 
testing-farm-staging-dynamic-pvc-852bd828-de88-4093-aaa7-0d8fa33e6964 (N/A, 
i-088625e35e83fd5e2)
 * vol-0a3824de4e6a63993 
testing-farm-staging-dynamic-pvc-46325ad3-985a-41c1-9baa-5ff001de781d (N/A, 
i-088625e35e83fd5e2)
 * vol-01d4f439abfbf8710 
testing-farm-staging-dynamic-pvc-b1e6a99a-23e2-498e-a11c-54e914add65b (N/A, 
i-088625e35e83fd5e2)
 * vol-031bd4c8f17c3c5f9 
testing-farm-staging-dynamic-pvc-d9f83744-46e0-4c42-ba55-6cba09b0da43 (N/A, 
i-088625e35e83fd5e2)
 * vol-07ddbd814ee4ace30 
testing-farm-staging-dynamic-pvc-4efd1c16-c160-4578-a27f-e6e78c3aea02 (N/A, 

Re: Deleting old AMIs in AWS

2024-04-04 Thread Miroslav Suchý

On 04. 04. 24 at 4:06 AM, Dusty Mabe wrote:

If you don't mind, let me try to get the FedoraGroup tag added to our CoreOS 
AMIs first.

I'll try to set up something to get that done this week.


Sure. I will pause any deletion till the end of the current freeze (scheduled for 
2024-04-16).

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


AWS usage per group (February)

2024-03-01 Thread Miroslav Suchý

Here comes the February edition of resources running in AWS. It's a snapshot of 
the resources running today.

For the first time, AMIs and Snapshots are included:

FedoraGroup: Not tagged
  Region: ap-south-2
# of AMIs: 222
Snapshots: 2220 GB in 222 snapshots
  Region: ap-south-1
# of AMIs: 2008
Snapshots: 17419 GB in 2297 snapshots
  Region: eu-south-1
# of AMIs: 710
Snapshots: 9804 GB in 997 snapshots
  Region: eu-south-2
# of AMIs: 222
Snapshots: 2220 GB in 222 snapshots
  Region: me-central-1
# of AMIs: 278
Snapshots: 2780 GB in 278 snapshots
  Region: il-central-1
# of AMIs: 202
Snapshots: 2120 GB in 212 snapshots
  Region: ca-central-1
# of AMIs: 2009
Snapshots: 15398 GB in 2301 snapshots
  Region: eu-central-1
# of AMIs: 6128
Snapshots: 39994 GB in 6417 snapshots
  Region: eu-central-2
# of AMIs: 222
Snapshots: 2220 GB in 222 snapshots
  Region: us-west-1
# of AMIs: 6171
Snapshots: 40309 GB in 6468 snapshots
  Region: us-west-2
# of AMIs: 6156
Snapshots: 40241 GB in 6457 snapshots
  Region: af-south-1
# of AMIs: 727
Snapshots: 9941 GB in 1015 snapshots
  Region: eu-north-1
# of AMIs: 752
Snapshots: 10148 GB in 1040 snapshots
  Region: eu-west-3
# of AMIs: 759
Snapshots: 10174 GB in 1046 snapshots
  Region: eu-west-2
# of AMIs: 2051
Snapshots: 15552 GB in 2340 snapshots
  Region: eu-west-1
# of AMIs: 6102
Snapshots: 39978 GB in 6404 snapshots
  Region: ap-northeast-3
# of AMIs: 531
Snapshots: 5302 GB in 531 snapshots
  Region: ap-northeast-2
# of AMIs: 2009
Snapshots: 15370 GB in 2300 snapshots
  Region: me-south-1
# of AMIs: 747
Snapshots: 10100 GB in 1034 snapshots
  Region: ap-northeast-1
# of AMIs: 6146
Snapshots: 41169 GB in 6605 snapshots
  Region: sa-east-1
# of AMIs: 6134
Snapshots: 40128 GB in 6436 snapshots
  Region: ap-east-1
# of AMIs: 748
Snapshots: 10120 GB in 1036 snapshots
  Region: ca-west-1
# of AMIs: 28
Snapshots: 280 GB in 28 snapshots
  Region: ap-southeast-1
Instance Type: c5n.2xlarge - Count: 1
Volume Type: gp3 - Total Size: 50 GiB
# of AMIs: 6117
Snapshots: 42061 GB in 6414 snapshots
  Region: ap-southeast-2
# of AMIs: 6119
Snapshots: 48573 GB in 6423 snapshots
  Region: ap-southeast-3
# of AMIs: 420
Snapshots: 4200 GB in 420 snapshots
  Region: ap-southeast-4
# of AMIs: 114
Snapshots: 1140 GB in 114 snapshots
  Region: us-east-1
Instance Type: c6a.2xlarge - Count: 3
Volume Type: gp3 - Total Size: 1564 GiB
Volume Type: gp2 - Total Size: 548 GiB
# of AMIs: 9973
Snapshots: 193883 GB in 10490 snapshots
  Region: us-east-2
Instance Type: c5.2xlarge - Count: 3
Volume Type: gp3 - Total Size: 516 GiB
Volume Type: gp2 - Total Size: 1032 GiB
# of AMIs: 1722
Snapshots: 121151 GB in 2485 snapshots

FedoraGroup: copr
  Region: us-east-1
Instance Type: t3a.medium - Count: 3
Instance Type: r5a.large - Count: 1
Instance Type: t3a.small - Count: 1
Instance Type: r7a.xlarge - Count: 1
Instance Type: c7a.4xlarge - Count: 1
Instance Type: m5a.4xlarge - Count: 1
Instance Type: t3a.2xlarge - Count: 1
Instance Type: c7i.xlarge - Count: 404
Instance Type: c7g.xlarge - Count: 193
Volume Type: st1 - Total Size: 7000 GiB
Volume Type: gp3 - Total Size: 11624 GiB
Volume Type: io2 - Total Size: 20 GiB
Volume Type: sc1 - Total Size: 81652 GiB

FedoraGroup: abrt
  Region: us-east-1
Instance Type: t3a.2xlarge - Count: 1
Volume Type: gp3 - Total Size: 10 GiB

FedoraGroup: centos
  Region: ap-south-1
Instance Type: t3.2xlarge - Count: 3
Instance Type: t3.large - Count: 1
Instance Type: m6i.2xlarge - Count: 1
Volume Type: gp3 - Total Size: 4570 GiB
  Region: eu-central-1
Instance Type: t2.small - Count: 1
Instance Type: t2.large - Count: 2
Instance Type: r5b.8xlarge - Count: 1
Instance Type: m6i.2xlarge - Count: 1
Instance Type: t3.xlarge - Count: 1
Volume Type: gp3 - Total Size: 13200 GiB
  Region: us-west-2
Instance Type: m6i.2xlarge - Count: 2
Instance Type: t3.small - Count: 1
Instance Type: t3.xlarge - Count: 1
Volume Type: gp3 - Total Size: 13145 GiB
  Region: af-south-1
Instance Type: m6i.2xlarge - Count: 1
Volume Type: gp3 - Total Size: 4050 GiB
  Region: eu-west-3
Instance Type: t2.large - Count: 2
Instance Type: t2.2xlarge - Count: 1
Instance Type: t3a.xlarge - Count: 1
Instance Type: t2.small - Count: 2

Untagged resources in AWS

2024-03-01 Thread Miroslav Suchý

This is without AMIs and snapshots, which would still produce a looong list.

Region: ap-southeast-1
Instances: (name, id, owner)
 * proxy38 (i-0a1ee820c765d573c, N/A)
Volumes - [id name (attached to instance, owner)]:
 * vol-0cbc4cc3e8cab429f N/A (proxy38, N/A)

Region: us-east-1
Instances: (name, id, owner)
 * N/A (i-002b91fad28adbbd8, AutoScaling)
 * N/A (i-029f1fd58efc46c62, AutoScaling)
 * N/A (i-0377c4527e994c5f1, AutoScaling)
Volumes - [id name (attached to instance, owner)]:
 * vol-052687f8fc3b09bc6 
kubernetes-dynamic-pvc-c0f128bf-d91e-4f5f-8c2c-ad169c7811a1 (N/A, 
1698424739242419174)
 * vol-0db7472b86bc75c0d 
testing-farm-staging-dynamic-pvc-3b8e8c59-79c2-45cb-b095-6f8976dd8ad9 (N/A, 
i-02deca5894e37e763)
 * vol-08d6d089c71689be0 
testing-farm-staging-dynamic-pvc-fe268915-6619-4b57-b643-9dedb91686cc (N/A, 
i-02deca5894e37e763)
 * vol-05f2d4ffcb1d6708a 
testing-farm-staging-dynamic-pvc-f2e08d70-785c-45ad-b387-a8b97bbcd996 (N/A, 
i-0bdbc9b1dd2b92bb7)
 * vol-036333a0a4cb38717 
testing-farm-staging-dynamic-pvc-7452b624-d6d0-4db9-8c05-b46662a6fd2e (N/A, 
i-0bdbc9b1dd2b92bb7)
 * vol-0a00f64279cae3e91 
testing-farm-staging-dynamic-pvc-13c2885c-67f1-4af3-93c6-b1cfa9312907 (N/A, 
i-0647c2571ea378711)
 * vol-0835dcd77e77790ad 
testing-farm-staging-dynamic-pvc-23419a97-cee0-4191-90b5-d10eb48ef527 (N/A, 
i-0647c2571ea378711)
 * vol-08f137aab87ab99ba 
testing-farm-staging-dynamic-pvc-54723197-b84c-4396-ba77-f1b1e29841f1 (N/A, 
i-00c59b26ed8cd6e33)
 * vol-07baa567c53a92ac9 
testing-farm-staging-dynamic-pvc-bbb2827a-c0f0-4d8f-bf1c-fe380f888916 (N/A, 
i-00c59b26ed8cd6e33)
 * vol-0d563a307b69275a4 
testing-farm-staging-dynamic-pvc-cfdbb6e1-f8f0-4fc9-8377-17e074d9f652 (N/A, 
i-0db3ea8ad4268b8f7)
 * vol-00ed882e7434da7c2 
testing-farm-staging-dynamic-pvc-d80f8118-a0fd-4c5a-ab10-69021532f3cc (N/A, 
i-0db3ea8ad4268b8f7)
 * vol-07ef2696f64eca3e4 N/A (, AutoScaling)
 * vol-04872a5e8007defdf testing-farm-production-02-dynamic-pvc-1f4758fc-88a7-4200-80d2-a38330a531ee (N/A, 
i-042b55378a5d2d2bc)
 * vol-034a0fab73efebb14 testing-farm-production-02-dynamic-pvc-fc894aa6-6b15-435c-8bbd-1a63c1c6fd3f (N/A, 
i-07ae8bf089c014f74)
 * vol-072635ac62b1d563b testing-farm-production-02-dynamic-pvc-aa66fa4f-d57b-433c-9ef4-f74ac67e13a6 (N/A, 
i-07ae8bf089c014f74)

 * vol-08837b9e8065f3ba9 N/A (, AutoScaling)
 * vol-0b54269f866bdb9ac 
testing-farm-production-dynamic-pvc-e23318b2-820f-4225-a6bf-93f9235a2c44 (, 
i-002b91fad28adbbd8)
 * vol-0e1c648e79153a998 
testing-farm-staging-dynamic-pvc-451e7361-7c58-41d1-9dd5-8b75b4cbd150 (N/A, 
i-0a67e06b991868827)
 * vol-03a736af9a4244a86 
testing-farm-staging-dynamic-pvc-cb13f8bc-fa89-47e0-be38-9461d1749792 (N/A, 
i-0a67e06b991868827)
 * vol-0683ebc07a920dca7 
testing-farm-staging-dynamic-pvc-84c19b77-081f-4ab1-89d9-de8f0a19209b (N/A, 
i-088625e35e83fd5e2)
 * vol-0c6f89d67ba46ead8 
testing-farm-staging-dynamic-pvc-0b94b80d-e4ed-4dae-8d6c-48fa9c87ebe0 (N/A, 
i-088625e35e83fd5e2)
 * vol-03d2a579a524b926e 
testing-farm-staging-dynamic-pvc-cc116c00-5508-493d-9b65-f99ad1ae0a2d (N/A, 
i-088625e35e83fd5e2)
 * vol-043ff9903a82867a1 
testing-farm-staging-dynamic-pvc-a805cb91-d435-4087-a022-b6c163953b77 (N/A, 
i-088625e35e83fd5e2)
 * vol-0421a320e41736a22 
testing-farm-staging-dynamic-pvc-5c0ff3f8-72a5-4ace-b67a-840cae72d19e (N/A, 
i-088625e35e83fd5e2)
 * vol-0fcfffcef3379f2fb 
testing-farm-staging-dynamic-pvc-01141aee-de71-4019-86c9-05614d1b9910 (N/A, 
i-088625e35e83fd5e2)
 * vol-0bdfc560e445a1ca0 
testing-farm-staging-dynamic-pvc-f32da186-4d88-45b9-9241-965ec3c667ae (N/A, 
i-088625e35e83fd5e2)
 * vol-0cfb615f6df385ec9 
testing-farm-staging-dynamic-pvc-2690e9c5-a1d6-4fce-9a4f-e15f107d596a (N/A, 
i-088625e35e83fd5e2)
 * vol-08e2039df396915ca 
testing-farm-staging-dynamic-pvc-45f52ccd-7db3-4409-8b60-a797dcb803ca (N/A, 
i-088625e35e83fd5e2)
 * vol-0a124a343d41f4897 
testing-farm-staging-dynamic-pvc-ea887ba0-7fc4-4729-b175-9d23f3f67e87 (N/A, 
i-088625e35e83fd5e2)
 * vol-01cbe75dda3583e65 
testing-farm-staging-dynamic-pvc-8bf6ff0a-34ff-492d-97c1-1081105d12b8 (N/A, 
i-088625e35e83fd5e2)
 * vol-06c4a3139d48e96f6 
testing-farm-staging-dynamic-pvc-54955031-3560-434e-9e33-9b9d55b66de1 (N/A, 
i-088625e35e83fd5e2)
 * vol-0944aae869650d726 
testing-farm-staging-dynamic-pvc-0cbf4046-e507-4dda-ae2a-a46439da18ab (N/A, 
i-088625e35e83fd5e2)
 * vol-00b3a1696f98cdeae 
testing-farm-staging-dynamic-pvc-852bd828-de88-4093-aaa7-0d8fa33e6964 (N/A, 
i-088625e35e83fd5e2)
 * vol-0a3824de4e6a63993 
testing-farm-staging-dynamic-pvc-46325ad3-985a-41c1-9baa-5ff001de781d (N/A, 
i-088625e35e83fd5e2)
 * vol-01d4f439abfbf8710 
testing-farm-staging-dynamic-pvc-b1e6a99a-23e2-498e-a11c-54e914add65b (N/A, 
i-088625e35e83fd5e2)
 * vol-031bd4c8f17c3c5f9 
testing-farm-staging-dynamic-pvc-d9f83744-46e0-4c42-ba55-6cba09b0da43 (N/A, 
i-088625e35e83fd5e2)
 * vol-07ddbd814ee4ace30 
testing-farm-staging-dynamic-pvc-4efd1c16-c160-4578-a27f-e6e78c3aea02 (N/A, 
i-088625e35e83fd5e2)
 * vol-0d184fb5b2521c7c3 

AWS usage per group (May)

2024-06-03 Thread Miroslav Suchý

Here comes the May edition of resources running in AWS. It's a snapshot of 
the resources running today.


FedoraGroup: centos-stream-build
 Region: us-east-2
   Service Name:
   Instance Type: t2.micro - Count: 1

FedoraGroup: Not tagged
 Region: ap-south-1
   Service Name:
   # of AMIs: 1309
 Region: eu-south-1
   Service Name:
   # of AMIs: 100
 Region: il-central-1
   Service Name:
   # of AMIs: 83
 Region: ca-central-1
   Service Name:
   # of AMIs: 1323
 Region: eu-central-1
   Service Name:
   # of AMIs: 1345
 Region: us-west-1
   Service Name:
   # of AMIs: 1351
 Region: us-west-2
   Service Name:
   # of AMIs: 1346
 Region: af-south-1
   Service Name:
   # of AMIs: 102
 Region: eu-north-1
   Service Name:
   # of AMIs: 101
 Region: eu-west-3
   Service Name:
   # of AMIs: 102
 Region: eu-west-2
   Service Name:
   # of AMIs: 1361
 Region: eu-west-1
   Service Name:
   # of AMIs: 1315
 Region: ap-northeast-2
   Service Name:
   Instance Type: c5n.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 100 GiB
   # of AMIs: 1310
 Region: me-south-1
   Service Name:
   # of AMIs: 100
 Region: ap-northeast-1
   Service Name:
   # of AMIs: 1348
 Region: sa-east-1
   Service Name:
   # of AMIs: 1327
 Region: ap-east-1
   Service Name:
   # of AMIs: 101
 Region: ap-southeast-1
   Service Name:
   # of AMIs: 1333
 Region: ap-southeast-2
   Service Name:
   # of AMIs: 1324
 Region: us-east-1
   Service Name:
   Instance Type: c6a.2xlarge - Count: 4
   Volume Type: gp3 - Total Size: 1882 GiB
   Volume Type: gp2 - Total Size: 1000 GiB
   # of AMIs: 1590
 Region: us-east-2
   Service Name:
   Instance Type: c5.2xlarge - Count: 5
   Instance Type: p3.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 748 GiB
   Volume Type: gp2 - Total Size: 2050 GiB
   # of AMIs: 847
   Service Name: Artemis
   # of AMIs: 166

FedoraGroup: pyai
 Region: us-east-1
   Service Name:
   Instance Type: t2.large - Count: 1
   Instance Type: i3.16xlarge - Count: 1

FedoraGroup: ga-archives
 Region: ap-south-1
   Service Name:
   # of AMIs: 47
 Region: eu-south-1
   Service Name:
   # of AMIs: 6
 Region: il-central-1
   Service Name:
   # of AMIs: 4
 Region: ca-central-1
   Service Name:
   # of AMIs: 47
 Region: eu-central-1
   Service Name:
   # of AMIs: 69
 Region: us-west-1
   Service Name:
   # of AMIs: 69
 Region: us-west-2
   Service Name:
   # of AMIs: 69
 Region: af-south-1
   Service Name:
   # of AMIs: 8
 Region: eu-north-1
   Service Name:
   # of AMIs: 6
 Region: eu-west-3
   Service Name:
   # of AMIs: 12
 Region: eu-west-2
   Service Name:
   # of AMIs: 47
 Region: eu-west-1
   Service Name:
   # of AMIs: 69
 Region: ap-northeast-2
   Service Name:
   # of AMIs: 47
 Region: me-south-1
   Service Name:
   # of AMIs: 6
 Region: ap-northeast-1
   Service Name:
   # of AMIs: 69
 Region: sa-east-1
   Service Name:
   # of AMIs: 69
 Region: ap-east-1
   Service Name:
   # of AMIs: 6
 Region: ap-southeast-1
   Service Name:
   # of AMIs: 69
 Region: ap-southeast-2
   Service Name:
   # of AMIs: 69
 Region: us-east-1
   Service Name:
   # of AMIs: 69
 Region: us-east-2
   Service Name:
   # of AMIs: 47

FedoraGroup: abrt
 Region: us-east-1
   Service Name:
   Instance Type: t3a.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 10 GiB

FedoraGroup: respins
 Region: us-east-1
   Service Name:
   Volume Type: gp3 - Total Size: 500 GiB

FedoraGroup: infra
 Region: ap-south-1
   Service Name:
   Instance Type: c5d.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 8 GiB
 Region: eu-central-1
   Service Name:
   Instance Type: c5d.xlarge - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Instance Type: m5.xlarge - Count: 1
   Instance Type: m6gd.4xlarge - Count: 1
   Volume Type: gp3 - Total Size: 16223 GiB
 Region: us-west-1
   Service Name:
   Instance Type: t3.medium - Count: 1
   Instance Type: t2.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 120 GiB
 Region: us-west-2
   Service Name:
   Instance Type: m5.large - Count: 3
   Instance Type: m6g.large - Count: 1
   Instance Type: c5n.2xlarge - Count: 1
   Instance Type: c6gd.xlarge - Count: 1
   Volume Type: standard - Total Size: 100 GiB
   Volume Type: gp3 - Total Size: 700 GiB
 Region: af-south-1
   Service Name:
   Instance Type: c5d.xlarge - Count: 2
   Volume Type: gp3 - Total Size: 25 GiB
 Region: eu-west-2
   Service Name:
   Instance Type: c5d.xlarge - Count: 1
   Volume Type: gp3 - Total Size: 20 GiB
 Region: eu-west-1
   Service Name:
   Instance Type: m4.10xlarge - Count: 1
   Volume Type: gp3 - Total Size: 20 GiB
 Region: ap-northeast-2
   Service Name:
   Volume Type: gp3 - Total Size: 190 GiB
 Region: sa-east-1
   Service Name:
   

Untagged resources in AWS

2024-06-03 Thread Miroslav Suchý

This is without AMIs and snapshots, which would still produce a looong list.

If you are an owner, add the `FedoraGroup` tag to this resource. See 
https://docs.fedoraproject.org/en-US/infra/sysadmin_guide/aws-access/#_role_and_user_policies
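
(A boto3 sketch of adding the tag to an instance together with its attached volumes - the instance ID and the group 
value are placeholders, use your own:)

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # region of your resource
    instance_id = "i-0123456789abcdef0"                  # placeholder: your instance

    # tag the instance and every volume attached to it in one call
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    ec2.create_tags(
        Resources=[instance_id] + [v["VolumeId"] for v in volumes],
        Tags=[{"Key": "FedoraGroup", "Value": "copr"}],  # put your own group here
    )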


Region: ap-northeast-2
Instances: (name, id, owner)
 * proxy31.fedoraproject.org (i-08603773299b48979, N/A)
Volumes - [id name (attached to instance, owner)]:
 * vol-07743f40260323360 N/A (proxy31.fedoraproject.org, N/A)

Region: us-east-1
Instances: (name, id, owner)
 * N/A (i-002b91fad28adbbd8, AutoScaling)
 * N/A (i-00916c7430ecfc311, AutoScaling)
 * N/A (i-077faed4d95201d57, AutoScaling)
 * N/A (i-0377c4527e994c5f1, AutoScaling)
Volumes - [id name (attached to instance, owner)]:
 * vol-0db7472b86bc75c0d 
testing-farm-staging-dynamic-pvc-3b8e8c59-79c2-45cb-b095-6f8976dd8ad9 (N/A, 
i-02deca5894e37e763)
 * vol-0a3824de4e6a63993 
testing-farm-staging-dynamic-pvc-46325ad3-985a-41c1-9baa-5ff001de781d (N/A, 
i-088625e35e83fd5e2)
 * vol-0d05c083194d428c4 
testing-farm-staging-dynamic-pvc-38b3bb56-77fe-4f0d-bc3c-68f6346f59a1 (N/A, 
i-088625e35e83fd5e2)
 * vol-0a124a343d41f4897 
testing-farm-staging-dynamic-pvc-ea887ba0-7fc4-4729-b175-9d23f3f67e87 (N/A, 
i-088625e35e83fd5e2)
 * vol-053fa81b8b6125326 N/A (, AutoScaling)
 * vol-087fb62c76e441427 
testing-farm-staging-dynamic-pvc-c771d3d5-b385-4d06-911a-19b2dc8928f6 (N/A, 
i-0c03d0d831e82bc58)
 * vol-0e64841c94a26cf38 
testing-farm-staging-dynamic-pvc-6ed1752f-3c7b-44f7-b22d-c7be684b8759 (N/A, 
i-0c03d0d831e82bc58)
 * vol-04872a5e8007defdf testing-farm-production-02-dynamic-pvc-1f4758fc-88a7-4200-80d2-a38330a531ee (N/A, 
i-042b55378a5d2d2bc)

 * vol-08d6d089c71689be0 
testing-farm-staging-dynamic-pvc-fe268915-6619-4b57-b643-9dedb91686cc (N/A, 
i-02deca5894e37e763)
 * vol-0166dc69cbf55b768 N/A (pyai-runner, bstinson)
 * vol-02c194c2c49da948b testing-farm-production-02-dynamic-pvc-21bc14f2-5f3a-422d-b9fd-59b4be28ecc5 (N/A, 
i-042b55378a5d2d2bc)

 * vol-0d184fb5b2521c7c3 
testing-farm-staging-dynamic-pvc-c1113599-ef37-43cc-9e72-d0514e953dd1 (N/A, 
i-088625e35e83fd5e2)
 * vol-0a7cb95bc5e00a5c3 
testing-farm-staging-dynamic-pvc-9b63024c-5ca4-41b5-9956-52c263752376 (N/A, 
i-088625e35e83fd5e2)
 * vol-063201a8440a48358 
testing-farm-staging-dynamic-pvc-becfca6a-ee25-4c1d-9dad-b0c0aa894d08 (N/A, 
i-0c03d0d831e82bc58)
 * vol-091343da48c9b02f3 
testing-farm-staging-dynamic-pvc-289164e5-9281-4a15-924e-2d0c8b98f680 (N/A, 
i-0e22c554e10c5a7cd)
 * vol-0d563a307b69275a4 
testing-farm-staging-dynamic-pvc-cfdbb6e1-f8f0-4fc9-8377-17e074d9f652 (N/A, 
i-0db3ea8ad4268b8f7)
 * vol-00af65a6e23a37d98 
testing-farm-staging-dynamic-pvc-4e3fc080-5b57-4816-84f7-2a66ee1f090a (N/A, 
i-088625e35e83fd5e2)
 * vol-08f137aab87ab99ba 
testing-farm-staging-dynamic-pvc-54723197-b84c-4396-ba77-f1b1e29841f1 (N/A, 
i-00c59b26ed8cd6e33)
 * vol-0aa62b2cf56b41c17 
testing-farm-staging-dynamic-pvc-7085501a-8be9-4319-8f3d-8dc0e93fcf65 (N/A, 
i-088625e35e83fd5e2)
 * vol-072635ac62b1d563b testing-farm-production-02-dynamic-pvc-aa66fa4f-d57b-433c-9ef4-f74ac67e13a6 (N/A, 
i-07ae8bf089c014f74)

 * vol-00ed882e7434da7c2 
testing-farm-staging-dynamic-pvc-d80f8118-a0fd-4c5a-ab10-69021532f3cc (N/A, 
i-0db3ea8ad4268b8f7)
 * vol-0c2052b891f555afd 
testing-farm-production-dynamic-pvc-1137ccdb-e63b-4e05-aa6b-067fdcc5dedb (, 
i-002b91fad28adbbd8)
 * vol-034a0fab73efebb14 testing-farm-production-02-dynamic-pvc-fc894aa6-6b15-435c-8bbd-1a63c1c6fd3f (N/A, 
i-07ae8bf089c014f74)

 * vol-0ef8adb84361dd745 
testing-farm-staging-dynamic-pvc-0a53c49c-4e09-49d6-b09a-2be2dacbb733 (N/A, 
i-0e8b9392dfcf85645)
 * vol-08837b9e8065f3ba9 N/A (, AutoScaling)
 * vol-03a874eda191ef53c N/A (, AutoScaling)
 * vol-03000a92e2331c690 
testing-farm-staging-dynamic-pvc-5a0cabf6-d646-40a0-94a8-88351991e931 (N/A, 
i-092a3cd1ba0adf7ba)
 * vol-0835dcd77e77790ad 
testing-farm-staging-dynamic-pvc-23419a97-cee0-4191-90b5-d10eb48ef527 (N/A, 
i-0647c2571ea378711)
 * vol-0600d0056269d0d9e 
testing-farm-staging-dynamic-pvc-b31ff07c-488b-481b-b8cd-11a47e48240f (N/A, 
i-0c03d0d831e82bc58)
 * vol-0a00f64279cae3e91 
testing-farm-staging-dynamic-pvc-13c2885c-67f1-4af3-93c6-b1cfa9312907 (N/A, 
i-0647c2571ea378711)
 * vol-069087892ed9fcf24 
testing-farm-staging-dynamic-pvc-0ad9b403-8788-4344-af34-de2f999d13ee (N/A, 
i-05ed8c7228971810e)
 * vol-01cbe75dda3583e65 
testing-farm-staging-dynamic-pvc-8bf6ff0a-34ff-492d-97c1-1081105d12b8 (N/A, 
i-088625e35e83fd5e2)
 * vol-03eeca1a8d9e066b8 N/A (pyai.fedorainfracloud.org, bstinson)
 * vol-0d1b0f48fc66ba385 
testing-farm-staging-dynamic-pvc-c56d8438-936e-4627-9377-7dfd2861ae81 (N/A, 
i-0c03d0d831e82bc58)
 * vol-0cfb615f6df385ec9 
testing-farm-staging-dynamic-pvc-2690e9c5-a1d6-4fce-9a4f-e15f107d596a (N/A, 
i-088625e35e83fd5e2)
 * vol-0c6f89d67ba46ead8 
testing-farm-staging-dynamic-pvc-0b94b80d-e4ed-4dae-8d6c-48fa9c87ebe0 (N/A, 
i-088625e35e83fd5e2)
 * vol-0944aae869650d726 
testing-farm-staging-dynamic-pvc-0cbf4046-e507-4dda-ae2a-a46439da18ab (N/A, 
i-088625e35e83fd5e2)
 * vol-058bde3340a757a60 

Re: Deleting old AMIs in AWS

2024-06-20 Thread Miroslav Suchý

On 14. 03. 24 at 9:58 AM, Miroslav Suchý wrote:
I waited till the freeze was over - just to be safe. Now I want to delete the old AMIs, likely in several waves, going 
from the oldest to ~2021. 


I deregistered all AMIs without the FedoraGroup tag that were older than 2020-01-01. This was done in all regions. The 
snapshots are still there.


I used this script:

https://github.com/xsuchy/fedora-infra-scripts/blob/main/delete-old-amis.py

The script deregistered more than 8k AMIs (now tell me what I broke)

The full log is here (well, almost full, as I forgot to log it at the beginning, 
so a few hundred lines are missing):

https://k00.fr/w5cbxyi3

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Re: Deleting old AMIs in AWS

2024-06-26 Thread Miroslav Suchý

On 20. 06. 24 at 1:55 PM, Miroslav Suchý wrote:


I deregistered all AMIs without the FedoraGroup tag that were older than 2020-01-01. This was done in all regions. The 
snapshots are still there. 


I continued with all older than 2021-01-01.

1240 AMIs deregistered.

The log is here: https://k00.fr/n2tx12di


--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
--


Re: Deleting old AMIs in AWS

2024-06-30 Thread Miroslav Suchý

On 26. 06. 24 at 11:43 PM, Miroslav Suchý wrote:
I deregistered all AMIs without the FedoraGroup tag that were older than 2020-01-01. This was done in all regions. The 
snapshots are still there. 


I continued with all older than 2021-01-01. 


I continued with all older than 2022-01-01.

797 AMIs deregistered.

The log is here: https://k00.fr/a5oqhiho


FYI - my next plan: do one more run for AMIs older than 2023 in a few days. Then I will pause (because of PTOs). In August 
I will start deleting the snapshots that were left behind by the deregistered AMIs.

And in September I will continue with deleting the more recent AMIs.
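
(A sketch of how the orphaned-snapshot step mentioned above could look - find snapshots we own that no existing AMI 
references any more; simplified, single region, with the actual deletion left commented out:)

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # repeated for every region

    # snapshot IDs still referenced by existing AMIs
    referenced = set()
    for image in ec2.describe_images(Owners=["self"])["Images"]:
        for mapping in image.get("BlockDeviceMappings", []):
            snapshot_id = mapping.get("Ebs", {}).get("SnapshotId")
            if snapshot_id:
                referenced.add(snapshot_id)

    # our snapshots that no AMI references are candidates for deletion
    for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
        for snapshot in page["Snapshots"]:
            if snapshot["SnapshotId"] not in referenced:
                print("orphaned:", snapshot["SnapshotId"], snapshot.get("Description", ""))
                # ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])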

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
-- 


Untagged resources in AWS

2024-06-30 Thread Miroslav Suchý

This is without AMIs and snapshots, which would still produce a looong list.

If you are an owner, add the `FedoraGroup` tag to this resource. See 
https://docs.fedoraproject.org/en-US/infra/sysadmin_guide/aws-access/#_role_and_user_policies


Here is the list that also includes the untagged AMIs (which I am going to delete at the 
end of summer): https://k00.fr/otu5336b

Region: us-east-1
Instances: (name, id, owner)
  * N/A (i-002b91fad28adbbd8, AutoScaling)
  * N/A (i-0377c4527e994c5f1, AutoScaling)
  * N/A (i-07795bfd602f91dcc, AutoScaling)
  * N/A (i-06d1f88745b453a54, msuchy)   (Ouch, I am going to handle this)
  * N/A (i-00aca2609b5174092, AutoScaling)
Volumes - [id name (attached to instance, owner)]:
  * vol-0db7472b86bc75c0d 
testing-farm-staging-dynamic-pvc-3b8e8c59-79c2-45cb-b095-6f8976dd8ad9 (N/A, 
i-02deca5894e37e763)
  * vol-0cf341ddd30766033 N/A (, AutoScaling)
  * vol-0a3824de4e6a63993 
testing-farm-staging-dynamic-pvc-46325ad3-985a-41c1-9baa-5ff001de781d (N/A, 
i-088625e35e83fd5e2)
  * vol-0d05c083194d428c4 
testing-farm-staging-dynamic-pvc-38b3bb56-77fe-4f0d-bc3c-68f6346f59a1 (N/A, 
i-088625e35e83fd5e2)
  * vol-0098934cca12e30b4 N/A (qa-openqa-worker-5, dbrouwer)
  * vol-0a124a343d41f4897 
testing-farm-staging-dynamic-pvc-ea887ba0-7fc4-4729-b175-9d23f3f67e87 (N/A, 
i-088625e35e83fd5e2)
  * vol-087fb62c76e441427 
testing-farm-staging-dynamic-pvc-c771d3d5-b385-4d06-911a-19b2dc8928f6 (N/A, 
i-0c03d0d831e82bc58)
  * vol-0e64841c94a26cf38 
testing-farm-staging-dynamic-pvc-6ed1752f-3c7b-44f7-b22d-c7be684b8759 (N/A, 
i-0c03d0d831e82bc58)
  * vol-04872a5e8007defdf testing-farm-production-02-dynamic-pvc-1f4758fc-88a7-4200-80d2-a38330a531ee (N/A, 
i-042b55378a5d2d2bc)

  * vol-08d6d089c71689be0 
testing-farm-staging-dynamic-pvc-fe268915-6619-4b57-b643-9dedb91686cc (N/A, 
i-02deca5894e37e763)
  * vol-0ee3acd8dd016b2e2 N/A (license-scanner, bstinson)
  * vol-0166dc69cbf55b768 N/A (pyai-runner, bstinson)
  * vol-02c194c2c49da948b testing-farm-production-02-dynamic-pvc-21bc14f2-5f3a-422d-b9fd-59b4be28ecc5 (N/A, 
i-042b55378a5d2d2bc)

  * vol-0d184fb5b2521c7c3 
testing-farm-staging-dynamic-pvc-c1113599-ef37-43cc-9e72-d0514e953dd1 (N/A, 
i-088625e35e83fd5e2)
  * vol-0a7cb95bc5e00a5c3 
testing-farm-staging-dynamic-pvc-9b63024c-5ca4-41b5-9956-52c263752376 (N/A, 
i-088625e35e83fd5e2)
  * vol-063201a8440a48358 
testing-farm-staging-dynamic-pvc-becfca6a-ee25-4c1d-9dad-b0c0aa894d08 (N/A, 
i-0c03d0d831e82bc58)
  * vol-091343da48c9b02f3 
testing-farm-staging-dynamic-pvc-289164e5-9281-4a15-924e-2d0c8b98f680 (N/A, 
i-0e22c554e10c5a7cd)
  * vol-0d563a307b69275a4 
testing-farm-staging-dynamic-pvc-cfdbb6e1-f8f0-4fc9-8377-17e074d9f652 (N/A, 
i-0db3ea8ad4268b8f7)
  * vol-00af65a6e23a37d98 
testing-farm-staging-dynamic-pvc-4e3fc080-5b57-4816-84f7-2a66ee1f090a (N/A, 
i-088625e35e83fd5e2)
  * vol-08f137aab87ab99ba 
testing-farm-staging-dynamic-pvc-54723197-b84c-4396-ba77-f1b1e29841f1 (N/A, 
i-00c59b26ed8cd6e33)
  * vol-0aa62b2cf56b41c17 
testing-farm-staging-dynamic-pvc-7085501a-8be9-4319-8f3d-8dc0e93fcf65 (N/A, 
i-088625e35e83fd5e2)
  * vol-072635ac62b1d563b testing-farm-production-02-dynamic-pvc-aa66fa4f-d57b-433c-9ef4-f74ac67e13a6 (N/A, 
i-07ae8bf089c014f74)

  * vol-00ed882e7434da7c2 
testing-farm-staging-dynamic-pvc-d80f8118-a0fd-4c5a-ab10-69021532f3cc (N/A, 
i-0db3ea8ad4268b8f7)
  * vol-0c2052b891f555afd 
testing-farm-production-dynamic-pvc-1137ccdb-e63b-4e05-aa6b-067fdcc5dedb (, 
i-002b91fad28adbbd8)
  * vol-034a0fab73efebb14 testing-farm-production-02-dynamic-pvc-fc894aa6-6b15-435c-8bbd-1a63c1c6fd3f (N/A, 
i-07ae8bf089c014f74)

  * vol-0ef8adb84361dd745 
testing-farm-staging-dynamic-pvc-0a53c49c-4e09-49d6-b09a-2be2dacbb733 (N/A, 
i-0e8b9392dfcf85645)
  * vol-08837b9e8065f3ba9 N/A (, AutoScaling)
  * vol-03055e160f349a039 N/A (, msuchy) (Ouch, I am going to handle this)
  * vol-03000a92e2331c690 
testing-farm-staging-dynamic-pvc-5a0cabf6-d646-40a0-94a8-88351991e931 (N/A, 
i-092a3cd1ba0adf7ba)
  * vol-0835dcd77e77790ad 
testing-farm-staging-dynamic-pvc-23419a97-cee0-4191-90b5-d10eb48ef527 (N/A, 
i-0647c2571ea378711)
  * vol-0600d0056269d0d9e 
testing-farm-staging-dynamic-pvc-b31ff07c-488b-481b-b8cd-11a47e48240f (N/A, 
i-0c03d0d831e82bc58)
  * vol-0a00f64279cae3e91 
testing-farm-staging-dynamic-pvc-13c2885c-67f1-4af3-93c6-b1cfa9312907 (N/A, 
i-0647c2571ea378711)
  * vol-069087892ed9fcf24 
testing-farm-staging-dynamic-pvc-0ad9b403-8788-4344-af34-de2f999d13ee (N/A, 
i-05ed8c7228971810e)
  * vol-01cbe75dda3583e65 
testing-farm-staging-dynamic-pvc-8bf6ff0a-34ff-492d-97c1-1081105d12b8 (N/A, 
i-088625e35e83fd5e2)
  * vol-03eeca1a8d9e066b8 N/A (pyai.fedorainfracloud.org, bstinson)
  * vol-0d1b0f48fc66ba385 
testing-farm-staging-dynamic-pvc-c56d8438-936e-4627-9377-7dfd2861ae81 (N/A, 
i-0c03d0d831e82bc58)
  * vol-0cfb615f6df385ec9 
testing-farm-staging-dynamic-pvc-2690e9c5-a1d6-4fce-9a4f-e15f107d596a (N/A, 
i-088625e35e83fd5e2)
  * vol-0c6f89d67ba46ead8 
testing-farm-staging-dynamic-pvc-0b94b80d-e4ed-4dae-8d6c-48fa9c87ebe0 (N/A, 

AWS usage per group (June)

2024-07-01 Thread Miroslav Suchý

Here comes the June edition of resources running in AWS. It's a snapshot of 
the resources running today.

I see some gp2 usage there, so I want to remind you that gp2 is slower **and** more expensive compared to gp3. So unless it is 
a rootfs volume, it does not make sense to allocate it.
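
(Converting an existing volume is a single API call and can be done while the volume stays attached; a boto3 sketch 
with a placeholder volume ID and region:)

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # region of the volume
    volume_id = "vol-0123456789abcdef0"                  # placeholder: the gp2 volume to convert

    # the volume stays attached and usable while the modification runs
    ec2.modify_volume(VolumeId=volume_id, VolumeType="gp3")

    # optionally watch the progress
    mods = ec2.describe_volumes_modifications(VolumeIds=[volume_id])["VolumesModifications"]
    print(mods[0]["ModificationState"], mods[0].get("Progress"))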




FedoraGroup: qa
 Region: us-east-1
   Service Name:
   Instance Type: c5n.metal - Count: 6
   Instance Type: c6a.8xlarge - Count: 1
   Instance Type: i4i.32xlarge - Count: 2
   Instance Type: c6in.metal - Count: 4
   Volume Type: gp3 - Total Size: 7000 GiB
   Volume Type: io1 - Total Size: 5000 GiB

FedoraGroup: coreos
 Region: ap-south-2
   Service Name:
   # of AMIs: 280
 Region: ap-south-1
   Service Name:
   # of AMIs: 700
 Region: eu-south-1
   Service Name:
   # of AMIs: 659
 Region: eu-south-2
   Service Name:
   # of AMIs: 280
 Region: me-central-1
   Service Name:
   # of AMIs: 336
 Region: il-central-1
   Service Name:
   # of AMIs: 172
 Region: ca-central-1
   Service Name:
   # of AMIs: 700
 Region: eu-central-1
   Service Name:
   # of AMIs: 700
 Region: eu-central-2
   Service Name:
   # of AMIs: 280
 Region: us-west-1
   Service Name:
   # of AMIs: 700
 Region: us-west-2
   Service Name:
   # of AMIs: 700
 Region: af-south-1
   Service Name:
   # of AMIs: 672
 Region: eu-north-1
   Service Name:
   # of AMIs: 700
 Region: eu-west-3
   Service Name:
   # of AMIs: 700
 Region: eu-west-2
   Service Name:
   # of AMIs: 700
 Region: eu-west-1
   Service Name:
   # of AMIs: 700
 Region: ap-northeast-3
   Service Name:
   # of AMIs: 589
 Region: ap-northeast-2
   Service Name:
   # of AMIs: 700
 Region: me-south-1
   Service Name:
   # of AMIs: 696
 Region: ap-northeast-1
   Service Name:
   # of AMIs: 700
 Region: sa-east-1
   Service Name:
   # of AMIs: 700
 Region: ap-east-1
   Service Name:
   # of AMIs: 696
 Region: ca-west-1
   Service Name:
   # of AMIs: 86
 Region: ap-southeast-1
   Service Name:
   # of AMIs: 700
 Region: ap-southeast-2
   Service Name:
   # of AMIs: 700
 Region: ap-southeast-3
   Service Name:
   # of AMIs: 478
 Region: ap-southeast-4
   Service Name:
   # of AMIs: 172
 Region: us-east-1
   Service Name:
   # of AMIs: 4502
 Region: us-east-2
   Service Name:
   # of AMIs: 700

FedoraGroup: copr
 Region: us-east-1
   Service Name:
   Instance Type: t3a.medium - Count: 3
   Instance Type: r5a.large - Count: 1
   Instance Type: t3a.small - Count: 1
   Instance Type: r7a.xlarge - Count: 1
   Instance Type: c7a.4xlarge - Count: 1
   Instance Type: m5a.4xlarge - Count: 1
   Instance Type: t3a.2xlarge - Count: 1
   Instance Type: c7i.xlarge - Count: 51
   Instance Type: c7g.xlarge - Count: 112
   Instance Type: c7a.large - Count: 1
   Volume Type: gp3 - Total Size: 10160 GiB
   Volume Type: sc1 - Total Size: 81652 GiB
   Volume Type: st1 - Total Size: 7500 GiB
   Volume Type: io2 - Total Size: 20 GiB
   # of AMIs: 5

FedoraGroup: abrt
 Region: us-east-1
   Service Name:
   Instance Type: t3a.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 10 GiB

FedoraGroup: pyai
 Region: us-east-1
   Service Name:
   Instance Type: t2.large - Count: 1
   Instance Type: i3.16xlarge - Count: 1

FedoraGroup: Not tagged
 Region: ap-south-1
   Service Name:
   # of AMIs: 328
 Region: eu-south-1
   Service Name:
   # of AMIs: 90
 Region: il-central-1
   Service Name:
   # of AMIs: 87
 Region: ca-central-1
   Service Name:
   # of AMIs: 326
 Region: eu-central-1
   Service Name:
   # of AMIs: 336
 Region: us-west-1
   Service Name:
   # of AMIs: 339
 Region: us-west-2
   Service Name:
   # of AMIs: 341
 Region: af-south-1
   Service Name:
   # of AMIs: 90
 Region: eu-north-1
   Service Name:
   # of AMIs: 90
 Region: eu-west-3
   Service Name:
   # of AMIs: 90
 Region: eu-west-2
   Service Name:
   # of AMIs: 340
 Region: eu-west-1
   Service Name:
   # of AMIs: 333
 Region: ap-northeast-2
   Service Name:
   # of AMIs: 328
 Region: me-south-1
   Service Name:
   # of AMIs: 90
 Region: ap-northeast-1
   Service Name:
   # of AMIs: 328
 Region: sa-east-1
   Service Name:
   # of AMIs: 333
 Region: ap-east-1
   Service Name:
   # of AMIs: 90
 Region: ap-southeast-1
   Service Name:
   # of AMIs: 334
 Region: ap-southeast-2
   Service Name:
   # of AMIs: 331
 Region: us-east-1
   Service Name:
   Instance Type: c6a.2xlarge - Count: 4
   Volume Type: gp3 - Total Size: 2288 GiB
   Volume Type: gp2 - Total Size: 1000 GiB
   Volume Type: io1 - Total Size: 1000 GiB
   # of AMIs: 419
 Region: us-east-2
   Service Name:
   Instance Type: c5.2xlarge - Count: 9
   Instance Type: p3.2xlarge - Count: 1
   Instance Type: m5.2xlarge - Count: 1
   Volume Type: gp3 - Total Size: 1088 GiB
   Volume Type: gp2 - Total 

Re: Deleting old AMIs in AWS

2024-07-02 Thread Miroslav Suchý

On 30. 06. 24 at 6:52 PM, Miroslav Suchý wrote:

I continued with all older than 2022-01-01


I continued with all older than 2023-01-01.

1014 AMIs deregistered.

The log is here: https://k00.fr/opdxkbs9

4283 AMIs without a tag remain. On the list I see AMIs like:

* CPE RHEL 9.1
* CentOS Stream 9x86_64 20240624
* CentOS Stream 8 aarch64 20240429
* Fedora-Cloud-Base-39_Beta-1.1.aarch64-hvm-us-east-2-gp3-0
* Fedora-Cloud-Base-AmazonEC2.x86_64-40-20240622.0-hvm-us-east-2-gp3-0
* fedora-coreos-40.20240301.92.0-x86_64

Full list is here: https://k00.fr/5z8vge8p

If any of these is yours, please tag it with FedoraGroup, otherwise it will be 
deleted at the end of summer.

--
Miroslav Suchy, RHCA
Red Hat, Manager, Packit and CPT, #brno, #fedora-buildsys
-- 

