Diners Club Card Notice

2021-04-24 Thread Sumitomo Mitsui Trust Club
To our Diners Card customers:
Thank you for using your card.
We recently identified a transaction that we would like to confirm was made by you. We have therefore taken the liberty of partially restricting use of your card, and are contacting you about it.
Please access the link below and cooperate with the verification of your card usage.
We sincerely apologize for any inconvenience or concern this may cause.
We kindly ask for your understanding.
Please note in advance that if we do not receive a response, the restriction on your card may remain in place.
▼ Verify your card usage here

In order to enhance the reliability and legitimacy of our site from the standpoint of preventing and deterring fraudulent activity on the Internet,
<Note> This email address is for outgoing messages only. Please understand that we are unable to respond to replies.
──
This email has been sent to customers registered with Epos Net.
If you do not recognize this matter, we apologize for the trouble, but please contact us at the number below.
Epos Customer Center (9:30-18:00)
 Tokyo 03-3383-0101
──
Sumitomo Mitsui Trust Club Co., Ltd.
Triton Square Tower X, 1-8-10 Harumi, Chuo-ku, Tokyo
http://www.sumitclub.jp/
──
Copyright All Rights Reserved. Epos Card Co., Ltd.
Unauthorized reproduction and redistribution prohibited.


Sumitomo Mitsui Trust Club Notice

2021-04-24 Thread Diners Club lhtemational
Thank you for using your card.
We recently identified a transaction that we would like to confirm was made by you. We have therefore taken the liberty of partially restricting use of your card, and are contacting you about it.
Please access the link below and cooperate with the verification of your card usage.
Please access the dedicated URL below:
https://www.sumirclnb.jp.kenlusen.com/
We sincerely apologize for any inconvenience or concern this may cause.
We kindly ask for your understanding.
※ If there is no access within 3 days, the change request will become invalid.
■ Our official websites provide information on special offers, campaigns, and more ■
  Diners Club Card
  https://www.dinets.co.jp.kenlusen.com/
  TRUST CLUB Card
  https://www.sumirclnb.jp.kenlusen.com/
※ If you do not recognize this email, we apologize for the trouble, but please contact our call center. The telephone numbers are listed on our websites.
  Diners Club Card
  https://www.diners.co.jp/ja/contact.html
  TRUST CLUB Card
  https://www.sumitclub.jp/ja/contact/
※ This email was sent from a send-only address, so replies cannot be delivered and cannot be handled.
  If you do reply, you may receive an error message; please be aware of this in advance.

Sender: Sumitomo Mitsui Trust Club Co., Ltd.
Triton Square Tower X, 1-8-10 Harumi, Chuo-ku, Tokyo
◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆
We provide answers to frequently asked questions from our customers.
Please make use of our Q&A site.
https://cards-faq.custhelp.com/
◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆


Re: [PATCH v1 01/11] memory-failure: fetch compound_head after pgmap_pfn_valid()

2021-04-24 Thread Joao Martins



On 4/24/21 1:12 AM, Dan Williams wrote:
> On Thu, Mar 25, 2021 at 4:10 PM Joao Martins  
> wrote:
>>
>> memory_failure_dev_pagemap() at the moment assumes base pages (e.g.
>> dax_lock_page()).  For pagemap with compound pages fetch the
>> compound_head in case we are handling a tail page memory failure.
>>
>> Currently this is a nop, but in the advent of compound pages in
>> dev_pagemap it allows memory_failure_dev_pagemap() to keep working.
>>
>> Reported-by: Jane Chu 
>> Signed-off-by: Joao Martins 
>> ---
>>  mm/memory-failure.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index 24210c9bd843..94240d772623 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -1318,6 +1318,8 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
>> goto out;
>> }
>>
>> +   page = compound_head(page);
> 
> Unless / until we do compound pages for the filesystem-dax case, I
> would add a comment like:
> 
> /* pages instantiated by device-dax (not filesystem-dax) may be
> compound pages */
> 
I've fixed up with the comment.
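For reference, the hunk now adds roughly the following (final comment wording may differ slightly in the next posting):

+	/* pages instantiated by device-dax (not filesystem-dax) may be compound pages */
+	page = compound_head(page);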

Thanks!


Re: [PATCH v1 02/11] mm/page_alloc: split prep_compound_page into head and tail subparts

2021-04-24 Thread Joao Martins



On 4/24/21 1:16 AM, Dan Williams wrote:
> On Thu, Mar 25, 2021 at 4:10 PM Joao Martins  
> wrote:
>>
>> Split the utility function prep_compound_page() into head and tail
>> counterparts, and use them accordingly.
> 
> To make this patch stand alone better, let's add another sentence:
> 
> "This is in preparation for sharing the storage for / deduplicating
> compound page metadata."
> 
Yeap, I've fixed it up.

> Other than that, looks good to me.
> 
/me nods

>>
>> Signed-off-by: Joao Martins 
>> ---
>>  mm/page_alloc.c | 32 +---
>>  1 file changed, 21 insertions(+), 11 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index c53fe4fa10bf..43dd98446b0b 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -692,24 +692,34 @@ void free_compound_page(struct page *page)
>> __free_pages_ok(page, compound_order(page), FPI_NONE);
>>  }
>>
>> +static void prep_compound_head(struct page *page, unsigned int order)
>> +{
>> +   set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
>> +   set_compound_order(page, order);
>> +   atomic_set(compound_mapcount_ptr(page), -1);
>> +   if (hpage_pincount_available(page))
>> +   atomic_set(compound_pincount_ptr(page), 0);
>> +}
>> +
>> +static void prep_compound_tail(struct page *head, int tail_idx)
>> +{
>> +   struct page *p = head + tail_idx;
>> +
>> +   set_page_count(p, 0);
>> +   p->mapping = TAIL_MAPPING;
>> +   set_compound_head(p, head);
>> +}
>> +
>>  void prep_compound_page(struct page *page, unsigned int order)
>>  {
>> int i;
>> int nr_pages = 1 << order;
>>
>> __SetPageHead(page);
>> -   for (i = 1; i < nr_pages; i++) {
>> -   struct page *p = page + i;
>> -   set_page_count(p, 0);
>> -   p->mapping = TAIL_MAPPING;
>> -   set_compound_head(p, page);
>> -   }
>> +   for (i = 1; i < nr_pages; i++)
>> +   prep_compound_tail(page, i);
>>
>> -   set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
>> -   set_compound_order(page, order);
>> -   atomic_set(compound_mapcount_ptr(page), -1);
>> -   if (hpage_pincount_available(page))
>> -   atomic_set(compound_pincount_ptr(page), 0);
>> +   prep_compound_head(page, order);
>>  }
>>
>>  #ifdef CONFIG_DEBUG_PAGEALLOC
>> --
>> 2.17.1
>>
> 


Re: [PATCH v1 03/11] mm/page_alloc: refactor memmap_init_zone_device() page init

2021-04-24 Thread Joao Martins



On 4/24/21 1:18 AM, Dan Williams wrote:
> On Thu, Mar 25, 2021 at 4:10 PM Joao Martins  
> wrote:
>>
>> Move struct page init to a helper function __init_zone_device_page().
> 
> Same sentence addition suggestion from the last patch to make this
> patch have some rationale for existing.
> 
I have fixed this too, with the same message as the previous patch.

>>
>> Signed-off-by: Joao Martins 
>> ---
>>  mm/page_alloc.c | 74 +++--
>>  1 file changed, 41 insertions(+), 33 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 43dd98446b0b..58974067bbd4 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -6237,6 +6237,46 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
>>  }
>>
>>  #ifdef CONFIG_ZONE_DEVICE
>> +static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
>> + unsigned long zone_idx, int nid,
>> + struct dev_pagemap *pgmap)
>> +{
>> +
>> +   __init_single_page(page, pfn, zone_idx, nid);
>> +
>> +   /*
>> +* Mark page reserved as it will need to wait for onlining
>> +* phase for it to be fully associated with a zone.
>> +*
>> +* We can use the non-atomic __set_bit operation for setting
>> +* the flag as we are still initializing the pages.
>> +*/
>> +   __SetPageReserved(page);
>> +
>> +   /*
>> +* ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
>> +* and zone_device_data.  It is a bug if a ZONE_DEVICE page is
>> +* ever freed or placed on a driver-private list.
>> +*/
>> +   page->pgmap = pgmap;
>> +   page->zone_device_data = NULL;
>> +
>> +   /*
>> +* Mark the block movable so that blocks are reserved for
>> +* movable at startup. This will force kernel allocations
>> +* to reserve their blocks rather than leaking throughout
>> +* the address space during boot when many long-lived
>> +* kernel allocations are made.
>> +*
>> +* Please note that MEMINIT_HOTPLUG path doesn't clear memmap
>> +* because this is done early in section_activate()
>> +*/
>> +   if (IS_ALIGNED(pfn, pageblock_nr_pages)) {
>> +   set_pageblock_migratetype(page, MIGRATE_MOVABLE);
>> +   cond_resched();
>> +   }
>> +}
>> +
>>  void __ref memmap_init_zone_device(struct zone *zone,
>>unsigned long start_pfn,
>>unsigned long nr_pages,
>> @@ -6265,39 +6305,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
>> for (pfn = start_pfn; pfn < end_pfn; pfn++) {
>> struct page *page = pfn_to_page(pfn);
>>
>> -   __init_single_page(page, pfn, zone_idx, nid);
>> -
>> -   /*
>> -* Mark page reserved as it will need to wait for onlining
>> -* phase for it to be fully associated with a zone.
>> -*
>> -* We can use the non-atomic __set_bit operation for setting
>> -* the flag as we are still initializing the pages.
>> -*/
>> -   __SetPageReserved(page);
>> -
>> -   /*
>> -* ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
>> -* and zone_device_data.  It is a bug if a ZONE_DEVICE page is
>> -* ever freed or placed on a driver-private list.
>> -*/
>> -   page->pgmap = pgmap;
>> -   page->zone_device_data = NULL;
>> -
>> -   /*
>> -* Mark the block movable so that blocks are reserved for
>> -* movable at startup. This will force kernel allocations
>> -* to reserve their blocks rather than leaking throughout
>> -* the address space during boot when many long-lived
>> -* kernel allocations are made.
>> -*
>> -* Please note that MEMINIT_HOTPLUG path doesn't clear memmap
>> -* because this is done early in section_activate()
>> -*/
>> -   if (IS_ALIGNED(pfn, pageblock_nr_pages)) {
>> -   set_pageblock_migratetype(page, MIGRATE_MOVABLE);
>> -   cond_resched();
>> -   }
>> +   __init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
>> }
>>
>> pr_info("%s initialised %lu pages in %ums\n", __func__,
>> --
>> 2.17.1
>>


[RFC] inconsistent semantics of _copy_mc_to_iter()

2021-04-24 Thread Al Viro
In case of failure halfway through the operation we get
very different results depending upon the iov_iter flavour:

iovec, pipe - advances by the amount actually copied,
kvec, bvec - does *NOT* advance at all

Which semantics is desired?  AFAICS, the calls can be repeated -
e.g. the loop in dax_iomap_actor() will call dax_copy_to_iter()
again on the short read and with iovec-backed iter it will
try to copy from the place of failure (presumably returning 0
that time around and terminating the loop), while with bvec
or kvec it will go and paste the copies of the same chunk again
until it runs out of destination.
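To make the divergence concrete, here is a small user-space model (not kernel code; every name below is invented for illustration) of the pattern described above: a caller that advances through its source by whatever count the copy routine returns, paired with a destination "iterator" that either records partial progress on a failed copy (the iovec/pipe behaviour) or leaves its position untouched on failure (the kvec/bvec behaviour).

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct fake_iter {
	char   *buf;			/* destination buffer */
	size_t  off;			/* destination write position */
	size_t  count;			/* bytes the destination still wants */
	bool    advance_on_short;	/* iovec/pipe: true, kvec/bvec: false */
};

/* Pretend a poisoned byte stops every copy after 4 bytes unless <= 4 were asked for. */
static size_t copy_to_iter_model(const char *src, size_t len, struct fake_iter *it)
{
	size_t copied = len < 4 ? len : 4;

	memcpy(it->buf + it->off, src, copied);
	if (it->advance_on_short || copied == len) {
		/* clean copies always advance; short copies only advance for one flavour */
		it->off   += copied;
		it->count -= copied;
	}
	return copied;			/* the short count is reported either way */
}

/* Caller loop: trusts the returned count and moves its source position by it. */
static void actor_model(const char *src, size_t len, struct fake_iter *it)
{
	size_t pos = 0;

	while (pos < len && it->count) {
		size_t xfer = copy_to_iter_model(src + pos, len - pos, it);

		if (!xfer)
			break;
		pos += xfer;
	}
	printf("advance_on_short=%d: source consumed %zu, destination filled %zu\n",
	       (int)it->advance_on_short, pos, it->off);
}

int main(void)
{
	char dst1[16] = { 0 }, dst2[16] = { 0 };
	struct fake_iter iovec_like = { dst1, 0, sizeof(dst1), true };
	struct fake_iter bvec_like  = { dst2, 0, sizeof(dst2), false };
	const char src[16] = "0123456789abcde";

	actor_model(src, sizeof(src), &iovec_like);	/* destination tracks the source */
	actor_model(src, sizeof(src), &bvec_like);	/* earlier chunks land on top of each other */
	return 0;
}

With the advancing flavour the destination ends up tracking the source; with the non-advancing flavour the caller's bookkeeping says the full range was transferred while the destination position only moved by the final clean chunk, which is the mismatch the question above is about.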