Message ID: 20120104030355.GL19274@valinux.co.jp
On 01/04/2012 05:03 AM, Isaku Yamahata wrote:
> Yes, it's quite doable in user space (qemu) with a kernel enhancement.
> And it would be easy to convert a separate daemon process into a thread
> in qemu.
>
> I think it should be done outside of the qemu process for some reasons.
> (I just repeat the same discussion from the KVM Forum because no one
> remembers it.)
>
> - ptrace (and its variants)
>   Some people want to investigate guest RAM on the host (with qemu stopped
>   or live). For example, enhance the crash utility so that it attaches to
>   the qemu process and debugs the guest kernel.

To debug the guest kernel you don't need to stop qemu itself. I agree
it's a problem for qemu debugging though.

> - core dump
>   The qemu process may core-dump. For postmortem analysis, people want to
>   investigate guest RAM. Again, enhance the crash utility so that it reads
>   the core file and analyzes the guest kernel. When the core is created,
>   the qemu process is already dead.

Yes, strong point.

> It precludes the possibility of handling the fault in the qemu process.

I agree.
Hi,

Sorry to hijack the thread like this, but I would like to let you know that we
recently reached a milestone in the research project I'm leading. We enhanced
KVM in order to deliver post-copy live migration using RDMA at kernel level.

A few points on the architecture of the system:

* RDMA communication engine in kernel (you can use Soft-iWARP or Soft-RoCE if
  you don't have hardware acceleration; we also support standard RDMA-enabled
  NICs).
* Pages are naturally transferred with a zero-copy protocol.
* Leverages the async page fault system.
* Pre-paging / faulting.
* No context switch, as everything is handled within the kernel using the page
  fault system.
* Hybrid migration (pre- + post-copy) available.
* Relies on an independent kernel module.
* No modification to the KVM kernel module.
* Minimal modification to the qemu-kvm code.
* We plan to add a page prioritization algorithm in order to optimise the
  pre-paging and background transfer.

You can learn a little bit more and see a demo here:
http://tinyurl.com/8xa2bgl
I hope to be able to provide more detail on the design soon, as well as more
concrete demos of the system (live migration of VMs running large enterprise
apps such as ERP or in-memory DBs).

Note: this is just a stepping stone, as the post-copy live migration mainly
enables us to validate the architecture design and code.

Regards
Benoit

On 12 January 2012 13:59, Avi Kivity <avi@redhat.com> wrote:
> [...]
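The page-fault-driven pull described above (no context switch: the missing page
is fetched inside the kernel's fault path) could look roughly like the sketch
below. This is purely illustrative and not the authors' module: struct
postcopy_region, rdma_read_page() and the postcopy_* names are invented
placeholders, and the .fault prototype is the two-argument form used by kernels
of this era (circa 3.2).

#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/bitops.h>

struct rdma_conn;                       /* hypothetical RDMA connection handle */

struct postcopy_region {                /* hypothetical per-VMA state */
	struct rdma_conn *conn;
	unsigned long *present;         /* bit set once a page has been fetched */
	struct page **pages;            /* pages fetched so far, indexed by pgoff */
};

/* Hypothetical transport helper: RDMA-read page 'idx' from the source host
 * directly into 'page' (zero copy on the receiving side). */
int rdma_read_page(struct rdma_conn *conn, unsigned long idx, struct page *page);

static int postcopy_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct postcopy_region *r = vma->vm_private_data;
	unsigned long idx = vmf->pgoff;

	/* A real implementation would need locking against concurrent faults
	 * on the same page; omitted here for brevity. */
	if (!test_bit(idx, r->present)) {
		struct page *page = alloc_page(GFP_HIGHUSER_MOVABLE);

		if (!page)
			return VM_FAULT_OOM;
		if (rdma_read_page(r->conn, idx, page)) {
			__free_page(page);
			return VM_FAULT_SIGBUS;
		}
		r->pages[idx] = page;
		set_bit(idx, r->present);
	}

	/* Hand the now-present page back with an extra reference; the core
	 * mm code maps it into the faulting task. */
	get_page(r->pages[idx]);
	vmf->page = r->pages[idx];
	return 0;
}

static const struct vm_operations_struct postcopy_vm_ops = {
	.fault = postcopy_vm_fault,
};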
(2012/01/13 10:09), Benoit Hudzia wrote:
> [...]

Do you have any plan to send the patch series of your implementation?

	Takuya
Very interesting. We can cooperate for better (postcopy) live migration.
The code doesn't seem to be available yet; I'm eager for it.

On Fri, Jan 13, 2012 at 01:09:30AM +0000, Benoit Hudzia wrote:
> Sorry to hijack the thread like this, but I would like to let you know that
> we recently reached a milestone in the research project I'm leading. We
> enhanced KVM in order to deliver post-copy live migration using RDMA at
> kernel level.
>
> A few points on the architecture of the system:
>
> * RDMA communication engine in kernel (you can use Soft-iWARP or Soft-RoCE
>   if you don't have hardware acceleration; we also support standard
>   RDMA-enabled NICs).

Do you mean the infiniband subsystem?

> * Pages are naturally transferred with a zero-copy protocol.
> * Leverages the async page fault system.
> * Pre-paging / faulting.
> * No context switch, as everything is handled within the kernel using the
>   page fault system.
> * Hybrid migration (pre- + post-copy) available.

Ah, I've been planning this as well.
After the pre-copy phase, is the dirty bitmap sent?

So far I had naively thought that the pre-copy phase would be bounded by the
number of iterations. Your choice, on the other hand, is a timeout for the
pre-copy phase. Do you have a rationale, or was it just natural for you?

> * Relies on an independent kernel module.
> * No modification to the KVM kernel module.
> * Minimal modification to the qemu-kvm code.
> * We plan to add a page prioritization algorithm in order to optimise the
>   pre-paging and background transfer.

Where do you plan to implement it? In qemu or in your kernel module?
This algorithm could be shared.

Thanks in advance.

> You can learn a little bit more and see a demo here:
> http://tinyurl.com/8xa2bgl
> [...]
On Thu, Jan 12, 2012 at 03:59:59PM +0200, Avi Kivity wrote:
> On 01/04/2012 05:03 AM, Isaku Yamahata wrote:
> > Yes, it's quite doable in user space (qemu) with a kernel enhancement.
> > And it would be easy to convert a separate daemon process into a thread
> > in qemu.
> >
> > I think it should be done outside of the qemu process for some reasons.
> > (I just repeat the same discussion from the KVM Forum because no one
> > remembers it.)
> >
> > - ptrace (and its variants)
> >   Some people want to investigate guest RAM on the host (with qemu stopped
> >   or live). For example, enhance the crash utility so that it attaches to
> >   the qemu process and debugs the guest kernel.
>
> To debug the guest kernel you don't need to stop qemu itself. I agree
> it's a problem for qemu debugging though.

But you need to debug postcopy migration itself with gdb too, don't you? I
don't see a big benefit in trying to prevent gdb from seeing what is really
going on in the qemu image.

> > - core dump
> >   The qemu process may core-dump. For postmortem analysis, people want to
> >   investigate guest RAM. Again, enhance the crash utility so that it reads
> >   the core file and analyzes the guest kernel. When the core is created,
> >   the qemu process is already dead.
>
> Yes, strong point.
>
> > It precludes the possibility of handling the fault in the qemu process.
>
> I agree.

On the receiving node, if the memory is not there yet (and it isn't), I'm not
sure how you plan to get a clean core dump (as if live migration weren't
running) by preventing the kernel from dumping zeroes when qemu crashes during
post-copy migration. Surely it won't be the kernel crash handler completing the
post-copy migration; it won't even know where to write the data in memory.
One more question.
Does your architecture/implementation (in theory) allow KVM memory features
like swap, KSM, and THP?

On Fri, Jan 13, 2012 at 11:03:23AM +0900, Isaku Yamahata wrote:
> Very interesting. We can cooperate for better (postcopy) live migration.
> The code doesn't seem to be available yet; I'm eager for it.
> [...]
Yes, we plan to release the patches as soon as we have cleaned up the code and
we get the green light from our company (and sadly that can take months..).

On 13 January 2012 01:31, Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> wrote:
> (2012/01/13 10:09), Benoit Hudzia wrote:
>> [...]
>
> Do you have any plan to send the patch series of your implementation?
>
> Takuya
On 13 January 2012 02:03, Isaku Yamahata <yamahata@valinux.co.jp> wrote:
> Very interesting. We can cooperate for better (postcopy) live migration.
> The code doesn't seem to be available yet; I'm eager for it.
>
> On Fri, Jan 13, 2012 at 01:09:30AM +0000, Benoit Hudzia wrote:
>> * RDMA communication engine in kernel (you can use Soft-iWARP or Soft-RoCE
>>   if you don't have hardware acceleration; we also support standard
>>   RDMA-enabled NICs).
>
> Do you mean the infiniband subsystem?

Yes, basically any software or hardware implementation that supports the
standard RDMA / OFED verbs stack in the kernel.

>> * Hybrid migration (pre- + post-copy) available.
>
> Ah, I've been planning this as well.
> After the pre-copy phase, is the dirty bitmap sent?

Yes, we send over the dirty bitmap in order to identify what is left to be
transferred. Combined with the priority algorithm, we then prioritise the
pages for the background transfer.

> So far I had naively thought that the pre-copy phase would be bounded by the
> number of iterations. Your choice, on the other hand, is a timeout for the
> pre-copy phase. Do you have a rationale, or was it just natural for you?

The main rationale is that the typical sysadmin is human, and a live migration
iteration cycle has no meaning for them. As a result we preferred to provide a
time constraint rather than an iteration constraint. Also, it is hard to
estimate how much bandwidth will be used per iteration cycle, which leads to
poor determinism.

>> * We plan to add a page prioritization algorithm in order to optimise the
>>   pre-paging and background transfer.
>
> Where do you plan to implement it? In qemu or in your kernel module?
> This algorithm could be shared.

Yes, we actually plan to release the algorithm first, before the RDMA
post-copy code. The algorithm can be used to optimise the normal pre-copy
process (as demonstrated in my talk at the KVM Forum), and the priority is
reversed for the post-copy page pull. My colleague Aidan Shribman is done with
the implementation and we are now in the testing phase, in order to quantify
the improvement.

> Thanks in advance.
>
>> You can learn a little bit more and see a demo here:
>> http://tinyurl.com/8xa2bgl
>> [...]
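Below is a small, self-contained sketch (not taken from the patches discussed
here) of the time-bounded hybrid policy described above: pre-copy passes run
until an administrator-chosen wall-clock budget expires, and whatever is still
dirty at switch-over is described by the dirty bitmap and resolved by
post-copy. The dirty-page behaviour is simulated and all numbers are arbitrary.

#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t now_ms(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

int main(void)
{
	const uint64_t budget_ms = 2000;    /* admin-chosen pre-copy time budget */
	const struct timespec pass_cost = { 0, 50 * 1000 * 1000 }; /* 50 ms/pass */
	uint64_t deadline = now_ms() + budget_ms;
	long dirty = 1L << 20;              /* simulated dirty-page count */
	int pass = 0;

	/* Pre-copy phase: bounded by wall-clock time, not by iteration count. */
	while (now_ms() < deadline && dirty > 0) {
		nanosleep(&pass_cost, NULL);   /* stand-in for one transfer pass */
		dirty = dirty / 4 + 4096;      /* guest keeps re-dirtying a working set */
		printf("pass %d: %ld pages still dirty\n", ++pass, dirty);
	}

	/*
	 * Switch-over: the pages still dirty here are described by the dirty
	 * bitmap sent to the destination, which then pulls them on demand
	 * (post-copy) or in the background (pre-paging).
	 */
	printf("switching to post-copy with %ld pages outstanding\n", dirty);
	return 0;
}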
On 13 January 2012 02:15, Isaku Yamahata <yamahata@valinux.co.jp> wrote:
> One more question.
> Does your architecture/implementation (in theory) allow KVM memory features
> like swap, KSM, and THP?

* Swap: yes, we support swap to disk (the page is pulled from swap before
  being sent over), and the swap process does its job on the other side.

* KSM: same, we support KSM. A KSM-shared page is broken down and split, and
  the copies are sent individually (yes, sub-optimal, but it makes the
  protocol less messy), and we let the KSM daemon do its job on the other
  side.

* THP: stickier here. Due to time constraints we decided to support it only
  partially. What that means: if we encounter a THP we break it down to
  standard page granularity, as that is the memory unit we currently
  manipulate. As a result you can have THP on the source, but you won't have
  THP on the other side. Note that we haven't fully explored the ramifications
  of THP with RDMA; I don't know whether THP plays well with the MMU of a
  hardware RDMA NIC. One thing I would like to explore is whether it is
  possible to break down the THP into standard pages and then reassemble them
  on the other side (does anyone know whether it is possible to aggregate
  pages to form a THP in the kernel?).

* cgroups: should work transparently, but we need to do more testing to
  confirm that.
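On the THP question above: as far as I know there is no kernel interface that
directly stitches 4 KiB pages back into a huge page, but khugepaged collapses
eligible ranges in the background, so marking the destination mapping with
MADV_HUGEPAGE lets the individually received pages become huge pages again
over time. A minimal userspace sketch (sizes arbitrary):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define GUEST_RAM_SIZE (512UL << 20)    /* 512 MiB, arbitrary for illustration */

int main(void)
{
	void *ram = mmap(NULL, GUEST_RAM_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (ram == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * Mark the range as a THP candidate: pages received individually at
	 * 4 KiB granularity during post-copy remain eligible for khugepaged
	 * to collapse back into huge pages in the background.
	 */
	if (madvise(ram, GUEST_RAM_SIZE, MADV_HUGEPAGE))
		perror("madvise(MADV_HUGEPAGE)");

	memset(ram, 0, GUEST_RAM_SIZE);     /* touch the pages */
	munmap(ram, GUEST_RAM_SIZE);
	return 0;
}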
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b63f5f7..85530fc 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2807,6 +2807,7 @@ int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
 	return ret;
 }
+EXPORT_SYMBOL_GPL(mem_cgroup_cache_charge);
 
 /*
  * While swap-in, try_charge -> commit or cancel, the page is locked.
diff --git a/mm/shmem.c b/mm/shmem.c
index d672250..d137a37 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2546,6 +2546,7 @@ int shmem_zero_setup(struct vm_area_struct *vma)
 	vma->vm_flags |= VM_CAN_NONLINEAR;
 	return 0;
 }
+EXPORT_SYMBOL_GPL(shmem_zero_setup);
 
 /**
  * shmem_read_mapping_page_gfp - read into page cache, using specified page allocation flags.
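The two hunks above export mm helpers with EXPORT_SYMBOL_GPL so that a GPL
kernel module can call them, presumably the separate migration module discussed
in this thread. As a rough illustration of the shmem_zero_setup() side only,
the hypothetical misc device below (all pcmig_* names and the device name are
invented) backs an mmap'ed region with shmem, the same mechanism a shared
/dev/zero mapping uses, so the pages are swappable and tmpfs-accounted:

#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/mm.h>

/* mmap() of the hypothetical /dev/pcmig gives the caller a shmem-backed
 * region via the newly exported shmem_zero_setup(). */
static int pcmig_mmap(struct file *file, struct vm_area_struct *vma)
{
	return shmem_zero_setup(vma);
}

static const struct file_operations pcmig_fops = {
	.owner = THIS_MODULE,
	.mmap  = pcmig_mmap,
};

static struct miscdevice pcmig_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name  = "pcmig",
	.fops  = &pcmig_fops,
};

static int __init pcmig_init(void)
{
	return misc_register(&pcmig_dev);
}

static void __exit pcmig_exit(void)
{
	misc_deregister(&pcmig_dev);
}

module_init(pcmig_init);
module_exit(pcmig_exit);
MODULE_LICENSE("GPL");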