Re: [PATCH 02/10] blk: Introduce ->corrupted_range() for block device
On Fri, Jan 08, 2021 at 10:55:00AM +0100, Christoph Hellwig wrote:
> It happens on a dax_device.  We should not intertwine dax and block_device
> even more after a lot of good work has happened to detangle them.

I agree that the dax device should not be implied from the block device,
but what happens if regular block device drivers grow the ability to
(say) perform a background integrity scan and want to ->corrupted_range?

--D

___
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-le...@lists.01.org
Re: [RFC PATCH v3 8/9] md: Implement ->corrupted_range()
On Fri, Jan 08, 2021 at 05:52:11PM +0800, Ruan Shiyang wrote:
>
> On 2021/1/5 上午7:34, Darrick J. Wong wrote:
> > On Fri, Dec 18, 2020 at 10:11:54AM +0800, Ruan Shiyang wrote:
> > >
> > > On 2020/12/16 上午4:51, Darrick J. Wong wrote:
> > > > On Tue, Dec 15, 2020 at 08:14:13PM +0800, Shiyang Ruan wrote:
> > > > > With the support of ->rmap(), it is possible to obtain the
> > > > > superblock on a mapped device.
> > > > >
> > > > > If a pmem device is used as one target of mapped device, we cannot
> > > > > obtain its superblock directly.  With the help of SYSFS, the mapped
> > > > > device can be found on the target devices.  So, we iterate the
> > > > > bdev->bd_holder_disks to obtain its mapped device.
> > > > >
> > > > > Signed-off-by: Shiyang Ruan
> > > > > ---
> > > > >  drivers/md/dm.c       | 66 +++
> > > > >  drivers/nvdimm/pmem.c |  9 --
> > > > >  fs/block_dev.c        | 21 ++
> > > > >  include/linux/genhd.h |  7 +
> > > > >  4 files changed, 100 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> > > > > index 4e0cbfe3f14d..9da1f9322735 100644
> > > > > --- a/drivers/md/dm.c
> > > > > +++ b/drivers/md/dm.c
> > > > > @@ -507,6 +507,71 @@ static int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
> > > > >  #define dm_blk_report_zones	NULL
> > > > >  #endif /* CONFIG_BLK_DEV_ZONED */
> > > > >
> > > > > +struct dm_blk_corrupt {
> > > > > +	struct block_device *bdev;
> > > > > +	sector_t offset;
> > > > > +};
> > > > > +
> > > > > +static int dm_blk_corrupt_fn(struct dm_target *ti, struct dm_dev *dev,
> > > > > +			     sector_t start, sector_t len, void *data)
> > > > > +{
> > > > > +	struct dm_blk_corrupt *bc = data;
> > > > > +
> > > > > +	return bc->bdev == (void *)dev->bdev &&
> > > > > +	       (start <= bc->offset && bc->offset < start + len);
> > > > > +}
> > > > > +
> > > > > +static int dm_blk_corrupted_range(struct gendisk *disk,
> > > > > +				  struct block_device *target_bdev,
> > > > > +				  loff_t target_offset, size_t len, void *data)
> > > > > +{
> > > > > +	struct mapped_device *md = disk->private_data;
> > > > > +	struct block_device *md_bdev = md->bdev;
> > > > > +	struct dm_table *map;
> > > > > +	struct dm_target *ti;
> > > > > +	struct super_block *sb;
> > > > > +	int srcu_idx, i, rc = 0;
> > > > > +	bool found = false;
> > > > > +	sector_t disk_sec, target_sec = to_sector(target_offset);
> > > > > +
> > > > > +	map = dm_get_live_table(md, &srcu_idx);
> > > > > +	if (!map)
> > > > > +		return -ENODEV;
> > > > > +
> > > > > +	for (i = 0; i < dm_table_get_num_targets(map); i++) {
> > > > > +		ti = dm_table_get_target(map, i);
> > > > > +		if (ti->type->iterate_devices && ti->type->rmap) {
> > > > > +			struct dm_blk_corrupt bc = {target_bdev, target_sec};
> > > > > +
> > > > > +			found = ti->type->iterate_devices(ti, dm_blk_corrupt_fn, &bc);
> > > > > +			if (!found)
> > > > > +				continue;
> > > > > +			disk_sec = ti->type->rmap(ti, target_sec);
> > > >
> > > > What happens if the dm device has multiple reverse mappings because
> > > > the physical storage is being shared at multiple LBAs?  (e.g. a
> > > > deduplication target)
> > >
> > > I thought that the dm device knows the mapping relationship, and it can
> > > be done by implementation of ->rmap() in each target.  Did I understand
> > > it wrong?
> >
> > The dm device /does/ know the mapping relationship.  I'm asking what
> > happens if there are *multiple* mappings.  For example, a deduplicating
> > dm device could observe that the upper level code wrote some data to
> > sector 200 and now it wants to write the same data to sector 500.
> > Instead of writing twice, it simply maps sector 500 in its LBA space to
> > the same space that it mapped sector 200.
> >
> > Pretend that sector 200 on the dm-dedupe device maps to sector 64 on the
> > underlying storage (call it /dev/pmem1 and let's say it's the only
> > target sitting underneath the dm-dedupe device).
> >
> > If /dev/pmem1 then notices that sector 64 has gone bad, it will start
> > calling ->corrupted_range handlers until it calls dm_blk_corrupted_range
> > on the dm-dedupe device.  At least in theory, the dm-dedupe driver's
> > rmap method ought to return both (64 -> 200) and (64 -> 500) so that
> > dm_blk_corrupted_range can pass on both corruption notices to whatever's
> > sitting atop the dedupe device.
> >
> > At the moment, your ->rmap prototype is only capable of returning one
> > sector_t mapping per target, and there's only the one target under the
> > dedupe device, so we cannot report the loss of sectors 200 and 500 to
> > whatever device is sitting on top of dm-dedupe.
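The one-to-many problem Darrick describes can be sketched in a few lines of userspace C. All names here (the extent table, rmap_single, rmap_iterate) are hypothetical stand-ins, not the kernel's: the point is only that an rmap returning a single sector_t silently drops sector 500, while an iterator-style rmap, shaped like ->iterate_devices() driving a callback per mapping, can report every LBA backed by the bad physical sector.

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long long sector_t;

/* Hypothetical forward map of a tiny dedupe target: LBA -> physical
 * sector.  LBAs 200 and 500 both deduplicate onto physical sector 64. */
struct dedupe_extent {
    sector_t lba;
    sector_t phys;
};

static const struct dedupe_extent fwd_map[] = {
    { 200, 64 },
    { 500, 64 },
    { 300, 65 },
};

#define N_EXTENTS (sizeof(fwd_map) / sizeof(fwd_map[0]))

/* Single-result rmap (the shape under review): it can only report the
 * first LBA that maps to the bad physical sector. */
static sector_t rmap_single(sector_t phys)
{
    for (size_t i = 0; i < N_EXTENTS; i++)
        if (fwd_map[i].phys == phys)
            return fwd_map[i].lba;
    return (sector_t)-1;
}

/* Iterator-style rmap: invokes fn once per reverse mapping, so a dedupe
 * target can surface all LBAs sharing one physical sector.  Returns the
 * number of mappings found. */
static int rmap_iterate(sector_t phys,
                        int (*fn)(sector_t lba, void *data), void *data)
{
    int n = 0;
    for (size_t i = 0; i < N_EXTENTS; i++)
        if (fwd_map[i].phys == phys) {
            fn(fwd_map[i].lba, data);
            n++;
        }
    return n;
}

struct hits { sector_t lba[8]; int n; };

static int collect(sector_t lba, void *data)
{
    struct hits *h = data;
    h->lba[h->n++] = lba;   /* record each corrupted upper-level LBA */
    return 0;
}
```

With physical sector 64 gone bad, rmap_single() reports only LBA 200; rmap_iterate() reports both 200 and 500, which is what dm_blk_corrupted_range would need to notify everything sitting atop the dedupe device.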
Re: [RFC PATCH v3 0/9] fsdax: introduce fs query to support reflink
Hi, Shiyang,

On 12/18/2020 1:13 AM, Ruan Shiyang wrote:
> > > So I tried the patchset with pmem error injection, the SIGBUS payload
> > > does not look right -
> > >
> > > ** SIGBUS(7): **
> > > ** si_addr(0x(nil)), si_lsb(0xC), si_code(0x4, BUS_MCEERR_AR) **
> > >
> > > I expect the payload looks like
> > >
> > > ** si_addr(0x7f3672e0), si_lsb(0x15), si_code(0x4, BUS_MCEERR_AR) **
> >
> > Thanks for testing.  I test the SIGBUS by writing a program which calls
> > madvise(..., MADV_HWPOISON) to inject memory-failure.  It just shows
> > that the program is killed by SIGBUS.  I cannot get any detail from it.
> > So, could you please show me the right way (test tools) to test it?
>
> > I'm assuming that Jane is using a program that calls sigaction to
> > install a SIGBUS handler, and dumps the entire siginfo_t structure
> > whenever it receives one...
>
> Yes, thanks Darrick.
>
> OK.  Let me try it and figure out what's wrong in it.

I injected poison via "ndctl inject-error", not expecting it made any
difference though.  Any luck?

thanks,
-jane
Re: [PATCH 03/10] fs: Introduce ->corrupted_range() for superblock
On Thu, Dec 31, 2020 at 12:55:54AM +0800, Shiyang Ruan wrote:
> Memory failure that occurs in fsdax mode will finally be handled in the
> filesystem.  We introduce this interface to find out files or metadata
> affected by the corrupted range, and try to recover the corrupted data
> if possible.
>
> Signed-off-by: Shiyang Ruan
> ---
>  include/linux/fs.h | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 8667d0cdc71e..282e2139b23e 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -1965,6 +1965,8 @@ struct super_operations {
>  			struct shrink_control *);
>  	long (*free_cached_objects)(struct super_block *,
>  			struct shrink_control *);
> +	int (*corrupted_range)(struct super_block *sb, struct block_device *bdev,

This adds an overly long line.  But more importantly it must work on the
dax device and not the block device.  I'd also structure the callback so
that it is called on the dax device only, with the file system storing
the super block in a private data member.
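The restructuring Christoph suggests can be mocked in userspace to show the call shape: the failure callback hangs off the dax device, the filesystem registers itself as the holder with its super_block stashed as private data, and no block_device appears in the interface at all. Every type and function name below is a hypothetical stand-in, not the kernel's API.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in types modelling the suggested structure. */
struct super_block { const char *fs_name; int notified; };

struct dax_holder_ops {
    /* Called on the dax device; holder_data is whatever the holder
     * (here, the filesystem) registered -- its super_block. */
    int (*notify_failure)(void *holder_data, long long off, size_t len);
};

struct dax_device {
    void *holder_data;                   /* the filesystem's sb */
    const struct dax_holder_ops *ops;
};

/* Filesystem-side handler: recovers its sb from the private pointer,
 * no block_device in sight. */
static int demo_fs_notify_failure(void *holder_data, long long off, size_t len)
{
    struct super_block *sb = holder_data;
    (void)off; (void)len;
    sb->notified = 1;   /* a real fs would walk its rmap and kill mappings */
    return 0;
}

static const struct dax_holder_ops demo_fs_ops = { demo_fs_notify_failure };

/* Driver-side entry point: the pmem driver only talks to the dax device
 * and whoever registered as its holder. */
static int dax_notify_failure(struct dax_device *dax, long long off, size_t len)
{
    if (!dax->ops || !dax->ops->notify_failure)
        return -1;                       /* no holder registered */
    return dax->ops->notify_failure(dax->holder_data, off, len);
}
```

The design point is that the block_device argument in the reviewed prototype disappears: the driver reports an offset into the dax device, and the holder translates it using state it already owns.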
Re: [PATCH 02/10] blk: Introduce ->corrupted_range() for block device
It happens on a dax_device.  We should not intertwine dax and block_device
even more after a lot of good work has happened to detangle them.
Re: [RFC PATCH v3 8/9] md: Implement ->corrupted_range()
On 2021/1/5 上午7:34, Darrick J. Wong wrote:
> [...]
>
> At the moment, your ->rmap prototype is only capable of returning one
> sector_t mapping per target, and there's only the one target under the
> dedupe device, so we cannot report the loss of sectors 200 and 500 to
> whatever device is sitting on top of dm-dedupe.

Got it.  I didn't know there is a kind of dm device called dm-dedupe.
Thanks for the guidance.

--
Thanks,
Ruan Shiyang.
--D

+			break;
+		}
+	}
+
+	if (!found) {
+		rc = -ENODEV;
+		goto out;
+	}
+
+	sb = get_super(md_bdev);
+	if (!sb) {
+		rc = bd_disk_holder_corrupted_range(md_bdev, to_bytes(disk_sec), len, data);
+		goto out;
+	} else if (sb->s_op->corrupted_range) {
+		loff_t off = to_bytes(disk_sec - get_start_sect(md_bdev));
+
+		rc = sb->s_op->corrupted_range(sb, md_bdev, off, len, data);

This "call bd_disk_holder_corrupted_range or sb->s_op->corrupted_range"
logic appears twice; should it be refactored?
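The duplicated "try the filesystem, else keep walking the holders" branch that the review points out could be hoisted into one helper, roughly as below. The types are minimal userspace stand-ins and the helper name is invented; the sketch only shows how both call sites would share a single dispatch path.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins so the refactor's shape can be shown. */
struct block_device { int id; };
struct super_block {
    int (*corrupted_range)(struct super_block *sb, struct block_device *bdev,
                           long long off, size_t len, void *data);
};

static int holder_calls, sb_calls;

/* Fallback path: no filesystem handler, keep propagating to holders. */
static int bd_disk_holder_corrupted_range(struct block_device *bdev,
                                          long long off, size_t len, void *data)
{
    (void)bdev; (void)off; (void)len; (void)data;
    holder_calls++;
    return 0;
}

/* Filesystem path: the superblock handles the corrupted range itself. */
static int demo_sb_corrupted_range(struct super_block *sb,
                                   struct block_device *bdev,
                                   long long off, size_t len, void *data)
{
    (void)sb; (void)bdev; (void)off; (void)len; (void)data;
    sb_calls++;
    return 0;
}

/* Hypothetical helper: the branch that appeared twice, written once, so
 * both call sites stay in sync if the dispatch rule ever changes. */
static int notify_corrupted_range(struct super_block *sb,
                                  struct block_device *bdev,
                                  long long off, size_t len, void *data)
{
    if (sb && sb->corrupted_range)
        return sb->corrupted_range(sb, bdev, off, len, data);
    return bd_disk_holder_corrupted_range(bdev, off, len, data);
}
```

Both branches of the patch's duplicated logic then collapse to a single notify_corrupted_range() call with the sb (possibly NULL) as the discriminator.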