{
  "commit": "6cda72047ea46272ecb9cc71acf1231cea07167a",
  "tree": "e352a0b7bf692432e982ac703cf120f34956f19f",
  "parents": [
    "ef69bc9f689de8380688be742f9b9df615d42429"
  ],
  "author": {
    "name": "Wei Yang",
    "email": "richard.weiyang@linux.alibaba.com",
    "time": "Thu Aug 06 23:23:59 2020 -0700"
  },
  "committer": {
    "name": "Linus Torvalds",
    "email": "torvalds@linux-foundation.org",
    "time": "Fri Aug 07 11:33:27 2020 -0700"
  },
  "message": "mm/sparse: only sub-section aligned range would be populated\n\nThere are two code paths which invoke __populate_section_memmap():\n\n * sparse_init_nid()\n * sparse_add_section()\n\nIn both cases we are sure the memory range is sub-section aligned:\n\n * we pass PAGES_PER_SECTION to sparse_init_nid()\n * we check the range with check_pfn_span() before calling\n   sparse_add_section()\n\nAlso, in the counterpart of __populate_section_memmap() we don't do such\ncalculation and check, since the range is already checked by check_pfn_span()\nin __remove_pages().\n\nRemove the calculation and check to keep it simple and consistent with its\ncounterpart.\n\nSigned-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>\nSigned-off-by: Andrew Morton <akpm@linux-foundation.org>\nAcked-by: David Hildenbrand <david@redhat.com>\nLink: http://lkml.kernel.org/r/20200703031828.14645-1-richard.weiyang@linux.alibaba.com\nSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>\n",
  "tree_diff": [
    {
      "type": "modify",
      "old_id": "41eeac67723bcf5c9dad7a508fec1e68abf5ef04",
      "old_mode": 33188,
      "old_path": "mm/sparse-vmemmap.c",
      "new_id": "16183d85a7d505538a14bc1d958ae672a547148d",
      "new_mode": 33188,
      "new_path": "mm/sparse-vmemmap.c"
    }
  ]
}