Messages by Date
- 2025/09/23  [ceph-users] Recheck for "large omap objects" warning  (Jan Kasprzak)
- 2025/09/23  [ceph-users] Re: CephFS shares with multiple access rights: Use subvolumes or multiple filesystems?  (Eugen Block)
- 2025/09/23  [ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?  (Stefan Kooman)
- 2025/09/23  [ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?  (Niklas Hambüchen)
- 2025/09/22  [ceph-users] tentacle 20.1.1 RC1 QE validation status  (Yuri Weinstein)
- 2025/09/22  [ceph-users] CephFS shares with multiple access rights: Use subvolumes or multiple filesystems?  (Sophonet)
- 2025/09/22  [ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?  (Anthony D'Atri)
- 2025/09/22  [ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?  (Niklas Hambüchen)
- 2025/09/22  [ceph-users] Re: radosgw multisite unable to initialize site config  (Redouane Kachach)
- 2025/09/22  [ceph-users] Re: Experience with mixed-arch clusters?  (Alexander Schreiber)
- 2025/09/22  [ceph-users] Re: Experience with mixed-arch clusters?  (Redouane Kachach)
- 2025/09/21  [ceph-users] Experience with mixed-arch clusters?  (Alexander Schreiber)
- 2025/09/21  [ceph-users] SSL cert expired for download.ceph.com  (Boris)
- 2025/09/20  [ceph-users] radosgw multisite unable to initialize site config  (Kevin Hrpcek)
- 2025/09/20  [ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior  (Michel Jouvin)
- 2025/09/20  [ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior  (Eugen Block)
- 2025/09/20  [ceph-users] Re: v20.1.0 Tentacle RC0 released  (Eugen Block)
- 2025/09/20  [ceph-users] Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations  (Alexander Patrakov)
- 2025/09/20  [ceph-users] Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations  (Alex from North)
- 2025/09/20  [ceph-users] Scrub errors after OSD_TOO_MANY_REPAIRS: how to recover?  (Michel Jouvin)
- 2025/09/20  [ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior  (Olivier Delcourt)
- 2025/09/19  [ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior  (Olivier Delcourt)
- 2025/09/19  [ceph-users] Re: advice needed for sizing of wal+db device for hdd  (Adrian Sevcenco)
- 2025/09/19  [ceph-users] SQUID ganesha failures  (Patrick Begou)
- 2025/09/19  [ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?  (Janne Johansson)
- 2025/09/19  [ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?  (Burkhard Linke)
- 2025/09/19  [ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?  (Niklas Hambüchen)
- 2025/09/19  [ceph-users] Mirroring images per snapshot to a different pool on remote site  (Emmanuel Lacour)
- 2025/09/19  [ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?  (Boris)
- 2025/09/19  [ceph-users] Why does recovering objects take much longer than the outage that caused them?  (Niklas Hambüchen)
- 2025/09/18  [ceph-users] Re: advice needed for sizing of wal+db device for hdd  (Janne Johansson)
- 2025/09/18  [ceph-users] Re: advice needed for sizing of wal+db device for hdd  (Adrian Sevcenco)
- 2025/09/18  [ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior  (Olivier Delcourt)
- 2025/09/18  [ceph-users] Re: Scrub errors after OSD_TOO_MANY_REPAIRS: how to recover?  (Michel Jouvin)
- 2025/09/18  [ceph-users] Re: Scrub errors after OSD_TOO_MANY_REPAIRS: how to recover?  (Enrico Bocchi)
- 2025/09/18  [ceph-users] Where/how to set PG auto repair/heal?  (lejeczek)
- 2025/09/18  [ceph-users] Re: Failing upgrade 18.2.7 to 19.2.3 - failing to activate via raw takes 5 minutes (before proceeding to lvm)  (Mikael Öhman)
- 2025/09/18  [ceph-users] September Ceph Science User Group Virtual Meeting  (Enrico Bocchi)
- 2025/09/18  [ceph-users] Re: Per-RBD-image stats  (Anthony D'Atri)
- 2025/09/18  [ceph-users] Re: Per-RBD-image stats  (Mika)
- 2025/09/18  [ceph-users] Re: Reef CEPHADM_CHECK_PUBLIC_MEMBERSHIP false positive alerts on some hosts (IPv6 only cluster)  (Stefan Kooman)
- 2025/09/18  [ceph-users] Re: Scrub errors after OSD_TOO_MANY_REPAIRS: how to recover?  (Janne Johansson)
- 2025/09/18  [ceph-users] Re: Where/how to set PG auto repair/heal?  (Eugen Block)
- 2025/09/17  [ceph-users] Re: v20.1.0 Tentacle RC0 released  (Eugen Block)
- 2025/09/17  [ceph-users] Re: [EXT] Re: Is there a faster way to merge PGs?  (Justin Mammarella)
- 2025/09/17  [ceph-users] Re: CephFS maturity on Squid?  (Tim Holloway)
- 2025/09/17  [ceph-users] Re: CephFS maturity on Squid?  (Nigel Williams)
- 2025/09/17  [ceph-users] Re: CephFS maturity on Squid?  (Tim Holloway)
- 2025/09/17  [ceph-users] Re: radosgw multisite unable to initialize site config  (Eugen Block)
- 2025/09/17  [ceph-users] Reply: Re: After upgrading Ceph cluster from Octopus (v15.2.13) to Quincy (v17.2.8), read latency on HDD OSDs increased from 15ms to around 100ms during peak I/O periods.  (Best Regards)
- 2025/09/17  [ceph-users] Re: cephfs volume mount error using both ceph-csi and mount.ceph (using -o mon_addr)  (John Jasen)
- 2025/09/17  [ceph-users] Async dirops maturity  (William David Edwards)
- 2025/09/17  [ceph-users] Re: CephFS maturity on Squid?  (Eugen Block)
- 2025/09/17  [ceph-users] capabilities needed for subvolume management?  (Robert Sander)
- 2025/09/17  [ceph-users] CephFS maturity on Squid?  (William David Edwards)
- 2025/09/17  [ceph-users] Re: Subject : Account-Level API Support in Ceph RGW for Production Use  (Stephan Hohn)
- 2025/09/17  [ceph-users] Re: Reef CEPHADM_CHECK_PUBLIC_MEMBERSHIP false positive alerts on some hosts (IPv6 only cluster)  (Boris)
- 2025/09/17  [ceph-users] Re: Reef CEPHADM_CHECK_PUBLIC_MEMBERSHIP false positive alerts on some hosts (IPv6 only cluster)  (Stefan Kooman)
- 2025/09/16  [ceph-users] Reef CEPHADM_CHECK_PUBLIC_MEMBERSHIP false positive alerts on some hosts (IPv6 only cluster)  (Boris)
- 2025/09/16  [ceph-users] Re: advice needed for sizing of wal+db device for hdd  (Jan Kasprzak)
- 2025/09/16  [ceph-users] Re: advice needed for sizing of wal+db device for hdd  (Anthony D'Atri)
- 2025/09/16  [ceph-users] Re: advice needed for sizing of wal+db device for hdd  (Jan Kasprzak)
- 2025/09/16  [ceph-users] Re: Failing upgrade 18.2.7 to 19.2.3 - failing to activate via raw takes 5 minutes (before proceeding to lvm)  (Michel Jouvin)
- 2025/09/16  [ceph-users] Re: advice needed for sizing of wal+db device for hdd  (Anthony D'Atri)
- 2025/09/16  [ceph-users] Re: advice needed for sizing of wal+db device for hdd  (Jan Kasprzak)
- 2025/09/16  [ceph-users] advice needed for sizing of wal+db device for hdd  (Adrian Sevcenco)
- 2025/09/15  [ceph-users] Re: radosgw multisite unable to initialize site config  (Kevin Hrpcek)
- 2025/09/15  [ceph-users] cephfs volume mount error using both ceph-csi and mount.ceph (using -o mon_addr)  (John Jasen)
- 2025/09/15  [ceph-users] [ceph-user]ceph-ansible || Ceph upgrade from octopus to quincy  (Danish Khan)
- 2025/09/15  [ceph-users] Re: radosgw multisite unable to initialize site config  (Kevin Hrpcek)
- 2025/09/15  [ceph-users] Re: radosgw multisite unable to initialize site config  (wissem mimouna)
- 2025/09/15  [ceph-users] Re: radosgw multisite unable to initialize site config  (Redouane Kachach)
- 2025/09/15  [ceph-users] Re: radosgw multisite unable to initialize site config  (Redouane Kachach)
- 2025/09/15  [ceph-users] Re: radosgw multisite unable to initialize site config  (Kevin Hrpcek)
- 2025/09/14  [ceph-users] Re: Ceph orchestrator not refreshing device list  (Bob Gibson)
- 2025/09/14  [ceph-users] Re: Failing upgrade 18.2.7 to 19.2.3 - failing to activate via raw takes 5 minutes (before proceeding to lvm)  (Mikael Öhman)
- 2025/09/14  [ceph-users] Re: Failing upgrade 18.2.7 to 19.2.3 - failing to activate via raw takes 5 minutes (before proceeding to lvm)  (Eugen Block)
- 2025/09/14  [ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior  (Olivier Delcourt)
- 2025/09/13  [ceph-users] How important is the "default" data pool being replicated for CephFS  (Mikael Öhman)
- 2025/09/13  [ceph-users] Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7  (Soenke Schippmann)
- 2025/09/13  [ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior  (Anthony D'Atri)
- 2025/09/13  [ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior  (Joachim Kraftmayer)
- 2025/09/13  [ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior  (Eugen Block)
- 2025/09/12  [ceph-users] Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations  (Alex from North)
- 2025/09/12  [ceph-users] v20.1.0 Tentacle RC0 released  (Yuri Weinstein)
- 2025/09/12  [ceph-users] Re: How important is the "default" data pool being replicated for CephFS  (Frédéric Nass)
- 2025/09/12  [ceph-users] Re: How important is the "default" data pool being replicated for CephFS  (Eugen Block)
- 2025/09/12  [ceph-users] Re: Is there a faster way to merge PGs?  (Eugen Block)
- 2025/09/12  [ceph-users] Re: Is there a faster way to merge PGs?  (Eugen Block)
- 2025/09/11  [ceph-users] Re: snap-schedule not running  (Eugen Block)
- 2025/09/11  [ceph-users] Re: v20.1.0 Tentacle RC0 released  (Eugen Block)
- 2025/09/11  [ceph-users] Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations  (Anthony D'Atri)
- 2025/09/11  [ceph-users] Is there a faster way to merge PGs?  (Justin Mammarella)
- 2025/09/11  [ceph-users] Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior  (Olivier Delcourt)
- 2025/09/11  [ceph-users] Failing upgrade 18.2.7 to 19.2.3 - failing to activate via raw takes 5 minutes (before proceeding to lvm)  (Mikael Öhman)
- 2025/09/11  [ceph-users] Re: How important is the "default" data pool being replicated for CephFS  (Anthony D'Atri)
- 2025/09/11  [ceph-users] Re: OSD socket closed  (Administrator)
- 2025/09/11  [ceph-users] Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations  (Anthony D'Atri)
- 2025/09/11  [ceph-users] Re: snap-schedule not running  (Sophonet)
- 2025/09/10  [ceph-users] Re: v20.1.0 Tentacle RC0 released  (Ilya Dryomov)
- 2025/09/10  [ceph-users] Re: OSD socket closed  (Ilya Dryomov)
- 2025/09/10  [ceph-users] Re: v20.1.0 Tentacle RC0 released  (Ilya Dryomov)
- 2025/09/10  [ceph-users] Re: v20.1.0 Tentacle RC0 released  (Eugen Block)
- 2025/09/10  [ceph-users] Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations  (Anthony D'Atri)
- 2025/09/10  [ceph-users] Re: OSD socket closed  (Kelson White)
- 2025/09/10  [ceph-users] OSD socket closed  (Samuel Moya Tinoco)
- 2025/09/10  [ceph-users] Reducing the OSD Heartbeat Grace & Interval  (Alexander Hussein-Kershaw)
- 2025/09/10  [ceph-users] Re: v20.1.0 Tentacle RC0 released  (John Mulligan)
- 2025/09/10  [ceph-users] Re: v20.1.0 Tentacle RC0 released  (Robert Sander)
- 2025/09/10  [ceph-users] test  (wissem mimouna)
- 2025/09/10  [ceph-users] Re: v20.1.0 Tentacle RC0 released  (Robert Sander)
- 2025/09/09  [ceph-users] Re: v20.1.0 Tentacle RC0 released  (Robert Sander)
- 2025/09/09  [ceph-users] Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7  (Anthony D'Atri)
- 2025/09/09  [ceph-users] peer... is using msgr V1 protocol ?  (lejeczek)
- 2025/09/09  [ceph-users] Re: peer... is using msgr V1 protocol ?  (lejeczek)
- 2025/09/09  [ceph-users] Re: ceph restful key permissions  (Pedro Gonzalez)
- 2025/09/09  [ceph-users] Re: peer... is using msgr V1 protocol ?  (Janne Johansson)
- 2025/09/09  [ceph-users] [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations  (Alex from North)
- 2025/09/09  [ceph-users] ceph restful key permissions  (BASSAGET Cédric)
- 2025/09/08  [ceph-users] CSC Meeting Notes September 8th 2025  (Adam King)
- 2025/09/08  [ceph-users] Moving to Zoom for today's CDS session  (Anthony Middleton)
- 2025/09/08  [ceph-users] Re: After upgrading Ceph cluster from Octopus (v15.2.13) to Quincy (v17.2.8), read latency on HDD OSDs increased from 15ms to around 100ms during peak I/O periods.  (Anthony D'Atri)
- 2025/09/08  [ceph-users] After upgrading Ceph cluster from Octopus (v15.2.13) to Quincy (v17.2.8), read latency on HDD OSDs increased from 15ms to around 100ms during peak I/O periods.  (Best Regards)
- 2025/09/08  [ceph-users] Re: RadosGW read/list-all user across all tenants, users, buckets  (wissem mimouna)
- 2025/09/07  [ceph-users] Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7  (Soenke Schippmann)
- 2025/09/07  [ceph-users] Re: snap-schedule not running  (Eugen Block)
- 2025/09/07  [ceph-users] Question about shard placement in erasure code pools  (Soeren Malchow)
- 2025/09/07  [ceph-users] Re: nfs-ganesha, subvolumes and subtree checking?  (Davíð Steinn Geirsson)
- 2025/09/07  [ceph-users] Re: nfs-ganesha, subvolumes and subtree checking?  (Eugen Block)
- 2025/09/07  [ceph-users] Deploy haproxy.nfs  (Thomas)
- 2025/09/06  [ceph-users] RadosGW read/list-all user across all tenants, users, buckets  (Jacques Hoffmann)
- 2025/09/06  [ceph-users] Re: Question about shard placement in erasure code pools  (Eugen Block)
- 2025/09/06  [ceph-users] Re: nfs-ganesha, subvolumes and subtree checking?  (Eugen Block)
- 2025/09/05  [ceph-users] nfs-ganesha, subvolumes and subtree checking?  (Davíð Steinn Geirsson)
- 2025/09/05  [ceph-users] Re: Is there a preferred EC plugin?  (Ryan Sleeth)
- 2025/09/05  [ceph-users] Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7  (Anthony D'Atri)
- 2025/09/05  [ceph-users] Re: Append to EC pool object (docs vs. reality)  (Anthony D'Atri)
- 2025/09/05  [ceph-users] Append to EC pool object (docs vs. reality)  (Jan Kasprzak)
- 2025/09/04  [ceph-users] Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7  (Igor Fedotov)
- 2025/09/04  [ceph-users] Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7  (Eugen Block)
- 2025/09/04  [ceph-users] Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7  (Soenke Schippmann)
- 2025/09/04  [ceph-users] Re: Is there a preferred EC plugin?  (Boris)
- 2025/09/03  [ceph-users] Re: snap-schedule not running  (Sophonet)
- 2025/09/03  [ceph-users] Ceph Scrubs  (Alex)
- 2025/09/03  [ceph-users] Re: squid 19.2.2 - drive in use but ceph is seeing it as "available"  (Steven Vacaroaia)
- 2025/09/03  [ceph-users] Re: squid 19.2.2 - drive in use but ceph is seeing it as "available"  (GLE, Vivien)
- 2025/09/03  [ceph-users] squid 19.2.2 - drive in use but ceph is seeing it as "available"  (Steven Vacaroaia)
- 2025/09/03  [ceph-users] Re: squid 19.2.2 - added disk does not reflect in available capacity on pools  (Steven Vacaroaia)
- 2025/09/03  [ceph-users] Re: Is there a preferred EC plugin?  (Stefan Kooman)
- 2025/09/03  [ceph-users] Is there a preferred EC plugin?  (Boris)
- 2025/09/03  [ceph-users] Grafana: Ceph-Cluster Advanced "in" OSDs not counted correctly  (Eugen Block)
- 2025/09/03  [ceph-users] Re: squid 19.2.2 - added disk does not reflect in available capacity on pools  (Steven Vacaroaia)
- 2025/09/03  [ceph-users] squid 19.2.2 cephadm - adding more placement hosts using osd spec yml file  (Steven Vacaroaia)
- 2025/09/03  [ceph-users] Re: squid 19.2.2 cephadm - adding more placement hosts using osd spec yml file  (Steven Vacaroaia)
- 2025/09/02  [ceph-users] Re: squid 19.2.2 cephadm - adding more placement hosts using osd spec yml file  (Eugen Block)
- 2025/09/02  [ceph-users] How to remove failed OSD & reuse it?  (lejeczek)
- 2025/09/02  [ceph-users] Re: squid 19.2.2 - added disk does not reflect in available capacity on pools  (Anthony D'Atri)
- 2025/09/02  [ceph-users] [CEPH-MDS] directory access blocked while looping subtree empty export  (????)
- 2025/09/02  [ceph-users] Librados async operations in C  (Jan Kasprzak)
- 2025/09/02  [ceph-users] Re: squid 19.2.2 cephadm - adding more placement hosts using osd spec yml file  (Steven Vacaroaia)
- 2025/09/02  [ceph-users] Re: Grafana: Ceph-Cluster Advanced "in" OSDs not counted correctly  (Eugen Block)
- 2025/09/02  [ceph-users] Re: How to remove failed OSD & reuse it?  (Eugen Block)
- 2025/09/02  [ceph-users] Re: How to remove failed OSD & reuse it?  (lejeczek)
- 2025/09/02  [ceph-users] Re: How to remove failed OSD & reuse it?  (Eugen Block)
- 2025/09/02  [ceph-users] Re: How to remove failed OSD & reuse it?  (lejeczek)
- 2025/09/02  [ceph-users] Re: How to remove failed OSD & reuse it?  (lejeczek)
- 2025/09/02  [ceph-users] Re: How to remove failed OSD & reuse it?  (lejeczek)
- 2025/09/02  [ceph-users] Re: How to remove failed OSD & reuse it?  (Eugen Block)
- 2025/09/01  [ceph-users] Re: squid 19.2.2 - added disk does not reflect in available capacity on pools  (Anthony D'Atri)
- 2025/09/01  [ceph-users] Re: squid 19.2.2 - added disk does not reflect in available capacity on pools  (Steven Vacaroaia)
- 2025/09/01  [ceph-users] Re: snap-schedule not running  (Eugen Block)
- 2025/08/31  [ceph-users] Re: Debian Packages for Trixie  (Andrew)
- 2025/08/31  [ceph-users] Re: squid 19.2.2 - added disk does not reflect in available capacity on pools  (Anthony D'Atri)
- 2025/08/31  [ceph-users] Re: snap-schedule not running  (Sophonet)
- 2025/08/31  [ceph-users] squid 19.2.2 - added disk does not reflect in available capacity on pools  (Steven Vacaroaia)
- 2025/08/29  [ceph-users] snap-schedule not running  (Sophonet)
- 2025/08/29  [ceph-users] Re: ceph-users Digest, Vol 134, Issue 88  (renjianxinlover)
- 2025/08/29  [ceph-users] Re: Debian Packages for Trixie  (Konold, Martin)
- 2025/08/28  [ceph-users] Re: Debian Packages for Trixie  (Daniel Baumann)
- 2025/08/28  [ceph-users] Re: Debian Packages for Trixie  (Jens Galsgaard)
- 2025/08/28  [ceph-users] Debian Packages for Trixie  (Andrew)
- 2025/08/27  [ceph-users] Re: OSD crc errors: Faulty SSD?  (Igor Fedotov)
- 2025/08/27  [ceph-users] Re: OSD crc errors: Faulty SSD?  (Anthony D'Atri)
- 2025/08/27  [ceph-users] OSD crc errors: Faulty SSD?  (Roland Giesler)
- 2025/08/27  [ceph-users] Re: Preventing device zapping while replacing faulty drive (Squid 19.2.2)  (Frédéric Nass)
- 2025/08/27  [ceph-users] Re: cannot reach telemetry.ceph.com -> HEALTH_WARN after cluster upgrade to 19.2.3  (Björn Lässig)
- 2025/08/27  [ceph-users] Re: Add previously OSD to a new installation of a 3 nodes Proxmox CEPH  (Eugen Block)
- 2025/08/27  [ceph-users] Re: MGR does not generate prometheus config for ceph-exporter  (Eugen Block)
- 2025/08/26  [ceph-users] Re: cannot reach telemetry.ceph.com -> HEALTH_WARN after cluster upgrade to 19.2.3  (Yaarit Hatuka)
- 2025/08/26  [ceph-users] Ceph Steering Committee Meeting Minutes for 2025-08-25  (Patrick Donnelly)
- 2025/08/26  [ceph-users] cannot reach telemetry.ceph.com -> HEALTH_WARN after cluster upgrade to 19.2.3  (Björn Lässig)
- 2025/08/26  [ceph-users] MGR does not generate prometheus config for ceph-exporter  (Adam Prycki)
- 2025/08/25  [ceph-users] NVME-oF only with quay.ceph.io/ceph-ci/ceph:squid-nvmeof ?  (Robert Sander)
- 2025/08/25  [ceph-users] Re: Large file downloads (~2TB) from CephFS drop at ~100 GB  (Burkhard Linke)
- 2025/08/25  [ceph-users] Re: After disk failure not deep scrubbed pgs started to increase (ceph quincy)  (Szabo, Istvan (Agoda))
- 2025/08/25  [ceph-users] Re: Large file downloads (~2TB) from CephFS drop at ~100 GB  (Mevludin Blazevic)
- 2025/08/25  [ceph-users] Re: Large file downloads (~2TB) from CephFS drop at ~100 GB  (Janne Johansson)
- 2025/08/24  [ceph-users] Re: Large file downloads (~2TB) from CephFS drop at ~100 GB  (Mevludin Blazevic)
- 2025/08/24  [ceph-users] For posterity: cct->_conf->osd_fast_shutdown_timeout OSD errors / Run Full Recovery from ONodes (might take a while) during Reef 18.2.1 to 18.2.7 upgrade  (Anthony D'Atri)
- 2025/08/24  [ceph-users] Re: Large file downloads (~2TB) from CephFS drop at ~100 GB  (Anthony D'Atri)