>> On Mon, Aug 30, 2021 at 01:32:55PM -0400, Johannes Weiner wrote:
> > However, when we think about *which* of the struct page mess the folio
> > cache descriptor.

-	magic = (unsigned long)page->freelist;

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h

> I think folios are a superset of lru_mem.
> throughout allocation sites. Except for the tail page bits, I don't see too much in struct
> > ahead.
> So yes, we need to use folios for anything that's mappable to userspace.
> I'd have personally preferred to call the head page just a "page", and
> better interface than GUP which returns a rather more compressed list

-static void deactivate_slab(struct kmem_cache *s, struct page *page,

> > I'm convinced that pgtable, slab and zsmalloc uses of struct page can all
> relationship.
> > > scanning thousands of pages per second to do this.
> disambiguate remaining struct page usage inside MM code.
> page (if it's a compound page).
> > > entry points to address tailpage confusion becomes nil: there is no
> incrementally annotating every single use of the page.

It's not like "page" isn't some randomly made up term.

> the page table reference tests don't seem to need page lock.

+ * slab/objects.

> head page.
>> Also: it's become pretty clear to me that we have crappy

Conceptually,

> > 	free_nonslab_page(page, object);
> struct folio {
> I think what we actually want to do here is:
> doesn't work.
> If you're still trying to sell folios as the be all, end all solution for

+	return page_address(&slab->page);

>>> it's worth, but I can be convinced otherwise.

-	unsigned long idx, pos, page_limit, freelist_count;
+	unsigned long idx, pos, slab_limit, freelist_count;
-	if (page->objects < 2 || !s->random_seq)
+	if (slab->objects < 2 || !s->random_seq)

> highlight when "generic" code is trying to access type-specific stuff
> Memory is dominated by larger allocations from the main workloads, but
> we'd solve not only the huge page cache, but also set us up for a MUCH
> > deal with tail pages in the first place, this amounts to a conversion
> defragmentation for a while.
> had mentioned in the other subthread a pfn_to_normal_page() to
> page->mapping, PG_readahead, PG_swapcache, PG_private
> accommodate that?
> a lot of places where our ontology of struct page uses is just nonsensical (all
> I certainly think it used to be messier in the past.

Some sort of subclassing going on?

> > > statically at boot time for the entirety of available memory.
>> > use slab allocator like method for <2MB pages.
> I would be glad to see the patchset upstream.
> 	unsigned int compound_nr;
> };
>> to mean "the size of the smallest allocation unit from the page
> + * page_slab - Converts from page to slab.

Hence the push to eliminate overloading and do

> > > > Our vocabulary is already strongly
> > > No.

Migrate

+ * per cpu freelist or deactivate the slab. If there is a mismatch then the slab

> > On Mon, Oct 18, 2021 at 05:56:34PM -0400, Johannes Weiner wrote:
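For readers trying to follow the slab hunks quoted above (page_slab(), page_address(&slab->page), slab->objects), here is a minimal sketch of the kind of overlay they imply: struct slab as a typed view of a slab's head page, with a transitional embedded struct page. The field layout below is an assumption inferred from this thread, not the layout from the actual patches, and it presumes normal kernel context (struct page, compound_head() and page_address() from <linux/mm.h>).

/*
 * Sketch only.  Field layout is an assumption; the union with struct page
 * is what lets "&slab->page" in the quoted hunks work without allocating a
 * separate descriptor.
 */
struct slab {
	union {
		struct {
			unsigned long flags;		/* overlays page->flags */
			struct list_head slab_list;	/* overlays page->lru */
			struct kmem_cache *slab_cache;
			void *freelist;			/* first free object */
			union {
				unsigned long counters;
				struct {
					unsigned inuse:16;
					unsigned objects:15;
					unsigned frozen:1;
				};
			};
		};
		struct page page;	/* transitional: same memory as the head page */
	};
};

/* page_slab - Converts from page to slab (a slab is never a tail page). */
static inline struct slab *page_slab(struct page *page)
{
	return (struct slab *)compound_head(page);
}

static inline void *slab_address(const struct slab *slab)
{
	/* mirrors the "+ return page_address(&slab->page);" hunk above */
	return page_address(&slab->page);
}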
+static inline bool SlabMulti(const struct slab *slab)

> not also a compound page and an anon page etc.
> buddy
> 	void *virtual;
> guess what it means, and it's memorable once they learn it.

+	if (WARN_ONCE(!is_slab(slab), "%s: Object is not a Slab page!\n",

> > > cache entries, anon pages, and corresponding ptes, yes?

-		slab_err(s, page, "objects %u > max %u",

> > }

-	for_each_object(p, s, page_address(page),

> > > > > > The relative importance of each one very much depends on your workload.
> > Folios should give us large allocations of file-backed memory and
> > - It's a lot of internal fragmentation.
> On Mon, Aug 30, 2021 at 04:27:04PM -0400, Johannes Weiner wrote:
> locked, etc, etc in different units from allocation size.
> if not, seeing struct page in MM code isn't nearly as ambiguous as is
> we need to create a new struct in the union-of-structs for free pages, and

Because

> compound_head() call and would have immediately provided some of these
> both the fs space and the mm space have now asked to do this to move
> On Tue, Oct 05, 2021 at 02:52:01PM +0100, Matthew Wilcox wrote:

+SLAB_MATCH(compound_head, slab_list);
-		discard_page = discard_page->next;
+	while (next_slab) {

> > memory on cheap flash saves expensive RAM.
> It's been a massive effort for Willy to get this far, who knows when
> storage for filesystems with a 56k block size.
> > -------------
> +{
> other pages "subpage" or something like that.
>> revamped it to take (page, offset, prot), it could construct the
> cache entries, anon pages, and corresponding ptes, yes?
> > _small_, and _simple_.
> more obvious to a kernel newbie.
> amount of open-ended churn and disruptiveness of your patches. And
> > There are more of those, but we can easily identify them: all
> 'struct slab' seems odd and well, IMHO, wrong.
> > > On x86, it would mean that the average page cache entry has 512

I initially found the folio

> it, but the people doing the work need to show the benefits.
> I've listed reasons why 4k pages are increasingly the wrong choice for
> > > > folios and the folio API.
> > > -	unsigned inuse:16;
> shared among them all? Lack of answers isn't
> {
> > This is why I asked
> > pages simultaneously.

Willy's original answer to that was that folio

>>> maintainable, the folio would have to be translated to a page quite
> There are two primary places where we need to map from a physical
> incremental.

+	    slab->counters == counters_old) {

> allocation from slab should have PageSlab set.

-static int check_bytes_and_report(struct kmem_cache *s, struct page *page,

> > executables.

+++ b/include/linux/bootmem_info.h

all at the same time.

> > The justification is that we can remove all those hidden calls to
> In fact, you're just making it WORSE.
> > > > No new type is necessary to remove these calls inside MM code.
> type of page we're dealing with.
> I don't think there will ever be consensus as long as you don't take
> of moveable page and unreclaimable object is an analog of unmoveable page.

> -	pobjects += page->objects - page->inuse;
> +	slabs++;

> > when paging into compressed memory pools.

Never a tailpage.
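The "+SLAB_MATCH(compound_head, slab_list);" hunk quoted above is the compile-time safety net for that overlay. A sketch of the idea follows; the field names track the struct slab sketch earlier in this thread, so treat them as assumptions rather than the exact upstream macro.

/*
 * Assert that each struct slab field sits at the same offset as the struct
 * page field it overlays, so casts between the two types cannot silently
 * break when either struct is reshuffled.
 */
#include <linux/build_bug.h>	/* static_assert() */
#include <linux/stddef.h>	/* offsetof() */

#define SLAB_MATCH(pg, sl)						\
	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))

SLAB_MATCH(flags, flags);
SLAB_MATCH(compound_head, slab_list);	/* keeps bit 0 clear: a slab is never a tail page */
#undef SLAB_MATCH

static_assert(sizeof(struct slab) <= sizeof(struct page));

Because these are static_asserts, any layout drift shows up as a build failure rather than as memory corruption at runtime.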
> of the way the code reads is different from how the code is executed,
> > > servers.
>> So if someone sees "kmem_cache_alloc()", they can probably make a

- away from "the page".

> > people working on using large pages for anon memory told you that using
> > > about hardware pages at all?
> > up to current memory sizes without horribly regressing certain
>> and "head page" at least produces confusing behaviour, if not an
> how page_is_idle() is defined) or we just convert it.
>> the "struct page".
> little-to-nothing in common with anon+file; they can't be mapped into
>> appropriate pte for the offset within that page.
> > dynamically allocated descriptor for our arbitrarily-sized memory objects,
> > allocate the "cache entry descriptor" bits - mapping, index etc.
> But I'd really
> As Willy has repeatedly expressed a take-it-or-leave-it attitude in

And will page_folio() be required for anything beyond the

> };

"folio" is no worse than "page", we've just had more time
If you'd been listening to us the same way that Willy
So right now I'm not sure if getting struct page down to two

> On Wed, Oct 20, 2021 at 01:06:04AM +0800, Gao Xiang wrote:
> I don't think that's what fs devs want at all.

@@ -1027,7 +1027,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
- * 1. all pages are linked together using page->freelist,
+ * 1. all pages are linked together using page->index.

> > > towards comprehensibility, it would be good to do so while it's still
>> more obvious to a kernel newbie.
>> But we expect most interfaces to pass around a proper type (e.g.,
> keep in mind going forward.
> coherent with the file space mappings that we maintain.
> > I find this line of argument highly disingenuous.
> };
> threads.

Stuff that isn't needed for

> Actual code might make this discussion more concrete and clearer.
> No, this is a good question.
> > > convention name that doesn't exactly predate Linux, but is most

@@ -2720,12 +2723,12 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,

> require the right 16 pages to come available, and that's really freaking
> > On Sat, Oct 16, 2021 at 04:28:23AM +0100, Matthew Wilcox wrote:
> allocation" being called that odd "folio" thing, and then the simpler

If they see things like "read_folio()", they are going to be

> very glad to do if some decision of this ->lru field is determined.
> tractable as a wrapper function. I know Dave Chinner suggested to
> > This discussion is now about whether folio are suitable for anon pages
> > and not-tail pages prevents the muddy thinking that can lead to
> > On Tue, Sep 21, 2021 at 03:47:29PM -0400, Johannes Weiner wrote:
> allocations.
> Are we going to bump struct page to 2M soon?
> up are valid and pertinent and deserve to be discussed.
> > real final transformation together otherwise it still takes the extra

It's not like "page" isn't some randomly made up term.

> add pages to the page cache yourself.
> three types: anon_mem, file_mem and folio or even four types: ksm_mem,
> > ample evidence from years of hands-on production experience that

+	mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s),

@@ -374,14 +437,14 @@ static inline struct mem_cgroup *memcg_from_slab_obj(void *ptr)
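Since page_folio() keeps coming up above, here is roughly what it boils down to. This is a sketch assuming the usual compound_head encoding (a tail page stores a pointer to its head page in page->compound_head with bit 0 set); the real definition is wrapped differently, but the effect is the same.

/*
 * Head and order-0 pages are their own folio, so the result is never a
 * tail page and the call is idempotent.
 */
static inline struct folio *page_folio(struct page *page)
{
	unsigned long head = READ_ONCE(page->compound_head);

	if (head & 1)
		return (struct folio *)(head - 1);	/* tail: strip the tag bit */
	return (struct folio *)page;			/* already the head/base page */
}

This is the point of handing filesystems a folio instead of a page: the "is this secretly a tail page?" question, and the hidden compound_head() calls that answer it, disappear from the interface.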
They're to be a new

> > based on the premise that a cache entry doesn't have to correspond to
> }

This can happen without any need for

+ * slab.

@@ -986,8 +984,8 @@ static int check_object(struct kmem_cache *s, struct page *page,

> > much smaller allocations - if ever.
> folios.
> There are no satisfying answers to any of these questions, but that

+	struct {	/* SLUB */

It's not good.

> tail pages into either subsystem, so no ambiguity

mem_cgroup_track_foreign_dirty() is only called

> fragmentation are going to be alleviated.

index 68e8831068f4..0661dc09e11b 100644

The buddy allocator uses page->lru for

+	if (unlikely(!slab)) {
-	page = alloc_slab_page(s, alloc_gfp, node, oo);

> > PAGE_SIZE and page->index.
> > s/folio/ream/g, 1155 more lines of swap.c.
> The swap cache shares a lot of code with the page cache, so changing
>>>> folios for anon memory would make their lives easier, and you didn't care.
> +	};

:)

> > It looks like this will be quite a large change to how erofs handles
> domain-specific minimalism and clarity from the filesystem side.
> 	if (!pte_none(*pte))
> In the current state of the folio patches, I agree with you.
> hot to me, tbh.
> > These are just a few examples from an MM perspective.

I think that was probably

+{

> slab-like grouping in the page allocator.
> > > incrementally annotating every single use of the page.
> > > My objection is simply to one shared abstraction for both.
> At the current stage of conversion, folio is a more clearly delineated
> > 	return (void *)((unsigned long)mapping & ~PAGE_MAPPING_FLAGS);
> part of this patch, struct folio is logically:
> > with and understand the MM code base.
- but I think that's a goal we could

+	return page_pgdat(&slab->page);

But for the

> userspace and they can't be on the LRU.
> that nobody reported regressions when they were added.)

The slab allocator is good at subdividing those into

> On Thu, Aug 26, 2021 at 09:58:06AM +0100, David Howells wrote:
> doing reads to; Matthew converted most filesystems to his new and improved

@@ -889,7 +887,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
-static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p)
+static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
@@ -902,12 +900,12 @@ static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p)
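To make the "struct folio is logically:" reference above concrete, here is a rough sketch of the shape being discussed: a typed view of a head (or order-0) page, unioned with struct page so it costs no extra memory while the conversion is transitional. The exact field list is an assumption based on the fields this thread keeps returning to (flags, lru, mapping, index, private, the refcount and memcg linkage), not a quote from the patch.

struct folio {
	union {
		struct {
			unsigned long flags;		/* same bits as the head page's flags */
			struct list_head lru;
			struct address_space *mapping;
			pgoff_t index;
			void *private;
			atomic_t _mapcount;
			atomic_t _refcount;
#ifdef CONFIG_MEMCG
			unsigned long memcg_data;
#endif
		};
		struct page page;	/* must stay layout-compatible with struct page */
	};
};

The union is the same trick the struct slab sketch uses: both are overlays on today's statically allocated memmap, which is what keeps the door open for a genuinely dynamically allocated descriptor later without changing the callers again.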