I was playing around with PFL layouts today, and I ran into an issue.  I have 
a file system running Lustre 2.10.6 on the servers and a client with 2.10.0 
installed.  I created a PFL file with this command:

[rfmohr@sip-login1 rfmohr]$ lfs setstripe -E 4M -c 2 -E 100M -c 4 comp_file
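
For reference, the intent here (assuming the usual PFL semantics) was a 
two-component layout:

    component 1: extent [0, 4M)    -> stripe count 2
    component 2: extent [4M, 100M) -> stripe count 4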

It did not return any errors, so I tried to query the layout:

[rfmohr@sip-login1 rfmohr]$ lfs getstripe comp_file
comp_file has no stripe info

And if I write any data to it, I end up with a file that uses the system’s 
default stripe count:

[rfmohr@sip-login1 rfmohr]$ dd if=/dev/zero of=comp_file bs=1M count=50
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 0.0825892 s, 635 MB/s

[rfmohr@sip-login1 rfmohr]$ lfs getstripe comp_file
comp_file
lmm_stripe_count:  1
lmm_stripe_size:   1048576
lmm_pattern:       1
lmm_layout_gen:    0
lmm_stripe_offset: 3
        obdidx           objid           objid           group
             3          265665        0x40dc1                0

I could not find a JIRA ticket that looked similar to this.  Is this a known 
bug?  Or some odd interop issue?  When I tried a similar command on another file 
system running 2.10.3 on both the servers and clients, I got the expected behavior:

-bash-4.2$ lfs setstripe -E 4M -c 2 -E 64M -c 4 comp_file

-bash-4.2$ lfs getstripe comp_file
comp_file
  lcm_layout_gen:  2
  lcm_entry_count: 2
    lcme_id:             1
    lcme_flags:          init
    lcme_extent.e_start: 0
    lcme_extent.e_end:   4194304
      lmm_stripe_count:  2
      lmm_stripe_size:   1048576
      lmm_pattern:       1
      lmm_layout_gen:    0
      lmm_stripe_offset: 6
      lmm_objects:
      - 0: { l_ost_idx: 6, l_fid: [0x100060000:0x8f84d:0x0] }
      - 1: { l_ost_idx: 7, l_fid: [0x100070000:0x8f72d:0x0] }

    lcme_id:             2
    lcme_flags:          0
    lcme_extent.e_start: 4194304
    lcme_extent.e_end:   67108864
      lmm_stripe_count:  4
      lmm_stripe_size:   1048576
      lmm_pattern:       1
      lmm_layout_gen:    65535
      lmm_stripe_offset: -1
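
In case the exact versions matter, they can be confirmed on each node with 
something like this (a minimal sketch; lctl get_param version should work on 
both 2.10 clients and servers):

    # run on the client and on the MDS/OSS nodes
    lctl get_param version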

--
Rick Mohr
Senior HPC System Administrator
National Institute for Computational Sciences
http://www.nics.tennessee.edu
