Hi All,

12.08.2016, 14:52, "Alexander Gordeev" <[email protected]>:
> Hi All,
>
> I hope I'm writing to the right mailing list. If not, please point me to
> the right place.
> I'm trying to write a video archive to a microSD card on an ARM-based IP
> camera. The camera's SDK uses Linux 3.10. That kernel is quite old, and
> F2FS has only been in mainline since 3.8 AFAIK, so it's probably not
> mature yet there. However, I decided to give it a try.
> The idea is to write video continuously in 5-minute chunks. I also keep an
> index file per archive chunk for faster seeks, plus a single SQLite
> database.
> When utilization reaches about 95%, the chunks and their indexes at the
> archive tail are deleted, so it works like a ring buffer. The
> overprovision ratio is the default 5%.
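The ring-buffer pruning described above can be sketched in shell. The directory layout, the `.ts`/`.idx` naming, and the `prune` helper are illustrative guesses, not the actual recorder code:

```shell
#!/bin/sh
# Sketch of the ring-buffer pruning: delete the oldest chunk (and its
# index file) until filesystem utilization drops below the threshold.

# Percent of the filesystem holding $1 that is in use, e.g. "94".
usage_pct() {
    df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

prune() {
    dir=$1; threshold=${2:-95}
    while [ "$(usage_pct "$dir")" -ge "$threshold" ]; do
        # Chunk filenames are assumed to sort chronologically,
        # so the first match is the oldest chunk.
        oldest=$(ls -1 "$dir"/*.ts 2>/dev/null | head -n 1)
        [ -n "$oldest" ] || break          # nothing left to delete
        rm -f "$oldest" "${oldest%.ts}.idx"
    done
}

# Example: prune /mnt/sdcard/archive 95
```

A real recorder would more likely do the same from C via statvfs(2), but the logic is the same.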
> It worked quite well for several days at about 95% utilization, but then
> today it went bad. Writes often take several seconds, as shown by strace.
> vmstat shows that my process waits for I/O most of the time:
>
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r b swpd free buff cache si so bi bo in cs us sy id wa
>  2 2 0 6352 4 35924 0 0 8 88 562 1316 41 7 0 52
>  1 2 0 6324 4 35928 0 0 4 44 553 1231 40 8 0 52
>  1 2 0 6324 4 35928 0 0 0 0 690 1471 36 10 0 54
>  1 3 0 6296 4 35932 0 0 0 0 530 1242 40 5 0 54
>  2 2 0 6296 4 35936 0 0 4 48 545 1244 40 6 0 54
>  1 2 0 6296 4 35940 0 0 4 44 549 1275 39 6 0 55
>  2 2 0 6288 4 35944 0 0 4 44 563 1315 39 8 0 53
>  3 2 0 6296 4 35944 0 0 0 0 502 1158 41 2 0 57
>  1 3 0 6296 4 35952 0 0 8 88 700 1527 40 9 0 51
>  1 2 0 6296 4 35952 0 0 0 0 482 1141 38 8 0 55
>  1 2 0 6296 4 35956 0 0 4 44 594 1383 38 13 0 49
>  1 2 0 6296 4 35956 0 0 0 0 489 1160 37 5 0 58
>  5 1 0 6268 4 35980 0 0 12 132 704 1565 42 9 0 49
>  2 1 0 6268 4 35984 0 0 4 44 531 1215 39 10 0 51
>  3 2 0 6268 4 35992 0 0 8 92 714 1574 36 9 0 55
>  1 1 0 6268 4 35992 0 0 0 0 485 1163 39 6 0 55
>  1 1 0 6240 4 36000 0 0 8 92 553 1282 38 9 0 53
>  1 2 0 6240 4 36000 0 0 0 0 488 1135 39 7 0 54
>  1 1 0 6240 4 36000 0 0 0 0 552 1264 39 9 0 52
>  3 1 0 6240 4 36000 0 0 0 0 510 1187 40 6 0 54
>  1 2 0 6240 4 36004 0 0 4 44 674 1496 43 8 0 49
>  1 1 0 6240 4 36012 0 0 8 88 572 1373 39 9 0 53
>  4 1 0 6232 4 36016 0 0 4 48 549 1248 41 4 0 55
>  3 1 0 6240 4 36016 0 0 0 0 520 1209 36 8 0 55
>
> Here is also /sys/kernel/debug/f2fs/status for reference:
> =====[ partition info(sda). #0 ]=====
> [SB: 1] [CP: 2] [SIT: 4] [NAT: 118] [SSA: 60] [MAIN: 29646(OverProv:1529 
> Resv:50)]
>
> Utilization: 94% (13597314 valid blocks)
>   - Node: 16395 (Inode: 2913, Other: 13482)
>   - Data: 13580919
>
> Main area: 29646 segs, 14823 secs 14823 zones
>   - COLD data: 3468, 1734, 1734
>   - WARM data: 12954, 6477, 6477
>   - HOT data: 28105, 14052, 14052
>   - Dir dnode: 29204, 14602, 14602
>   - File dnode: 19960, 9980, 9980
>   - Indir nodes: 29623, 14811, 14811
>
>   - Valid: 13615
>   - Dirty: 13309
>   - Prefree: 0
>   - Free: 2722 (763)
>
> GC calls: 8622 (BG: 4311)
>   - data segments : 8560
>   - node segments : 62
> Try to move 3552161 blocks
>   - data blocks : 3540278
>   - node blocks : 11883
>
> Extent Hit Ratio: 49 / 4171
>
> Balancing F2FS Async:
>   - nodes 6 in 141
>   - dents 0 in dirs: 0
>   - meta 13 in 346
>   - NATs 16983 > 29120
>   - SITs: 17
>   - free_nids: 1861
>
> Distribution of User Blocks: [ valid | invalid | free ]
>   [-----------------------------------------------|-|--]
>
> SSR: 1230719 blocks in 14834 segments
> LFS: 15150190 blocks in 29589 segments
>
> BDF: 89, avg. vblocks: 949
>
> Memory: 6754 KB = static: 4763 + cached: 1990
>
> Please note that I tried to put all the archive and index files into the
> cold area using the file-extension mechanism, and indeed a couple of days
> ago I saw segment counts close to the total for the cold area.
> But now the cold area is shrinking very quickly. What am I doing wrong?
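For context on the extension mechanism: on a 3.10-era setup the cold-extension list is, as far as I know, fixed at format time, and files are classified when they are created, so files that existed before the list was set keep their old temperature. A sketch using f2fs-tools' mkfs.f2fs; the `ts`/`idx` extensions are placeholders for the actual chunk and index naming:

```shell
# Add the chunk/index extensions to mkfs.f2fs's built-in cold list.
# -e takes a comma-separated extension list; "ts,idx" is a guess at the
# real file naming. WARNING: formatting destroys the card's contents.
mkfs.f2fs -l archive -e ts,idx /dev/sda1
```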
>
> Can this be fixed by using a backport from 
> git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-stable.git ?
> I also measured the SD card's erase block size with flashbench. It seems
> to be 8 MB, not the 4 MB I assumed here.
> Can that cause such serious problems? Is an 8 MB erase block safe to
> hardcode, or should I run flashbench on every card?
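On the erase-block question: an f2fs segment is 2 MiB, so an 8 MiB erase block corresponds to 4 segments per section, which can be expressed at format time. A sketch (the device names and the 8 MiB figure come from the message above and are not verified):

```shell
# Probe likely erase-block boundaries: look for the block size at which
# the latency steps in flashbench's output level off.
./flashbench -a /dev/sda --blocksize=1024

# Align f2fs allocation to an 8 MiB erase block:
# 4 segments of 2 MiB per section (-s). Destroys the card's contents.
mkfs.f2fs -s 4 /dev/sda1
```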
>
> Please help. Thanks for reading!

I am now using the backported version of f2fs mentioned above. It helps a
bit, but not much. This is the new status:

=====[ partition info(sda). #0, RW]=====
[SB: 1] [CP: 2] [SIT: 4] [NAT: 118] [SSA: 60] [MAIN: 29646(OverProv:1529 
Resv:50)]

Utilization: 94% (13608114 valid blocks)
  - Node: 17228 (Inode: 3793, Other: 13435)
  - Data: 13590886
  - Inline_xattr Inode: 0
  - Inline_data Inode: 52
  - Inline_dentry Inode: 0
  - Orphan Inode: 0

Main area: 29646 segs, 14823 secs 14823 zones
  - COLD  data: 11906, 5953, 5953
  - WARM  data: 11782, 5891, 5891
  - HOT   data: 11598, 5799, 5799
  - Dir   dnode: 9418, 4709, 4709
  - File   dnode: 11833, 5916, 5916
  - Indir nodes: 11382, 5691, 5691

  - Valid: 15671
  - Dirty: 12904
  - Prefree: 0
  - Free: 1071 (27)

CP calls: 3320 (BG: 0)
GC calls: 2240 (BG: 1)
  - data segments : 3866 (1236)
  - node segments : 243 (0)
Try to move 1123706 blocks (BG: 429924)
  - data blocks : 1012524 (429924)
  - node blocks : 111182 (0)

Extent Cache:
  - Hit Count: L1-1:55840 L1-2:14504 L2:1926
  - Hit Ratio: 3% (72270 / 2221766)
  - Inner Struct Count: tree: 570(0), node: 3

Balancing F2FS Async:
  - inmem:    0, wb_bios:    2
  - nodes:    0 in  239
  - dents:    0 in dirs:   0 (   0)
  - datas:  706 in files:   0
  - meta:    0 in  512
  - NATs:         0/      117
  - SITs:         0/    29646
  - free_nids:      1575

Distribution of User Blocks: [ valid | invalid | free ]
  [-----------------------------------------------|---|]

IPU: 1960 blocks
SSR: 18200 blocks in 71 segments
LFS: 2136736 blocks in 4173 segments

BDF: 87, avg. vblocks: 862

Memory: 9437 KB
  - static: 6385 KB
  - cached: 47 KB
  - paged : 3004 KB


I also have a patch for the linux-3.10.y branch; the backport doesn't build without it.

-- 
 Alexander

_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
