Message ID: 1330680137-6601-1-git-send-email-batuzovk@ispras.ru
State: New
On 02.03.2012 13:22, Kirill Batuzov wrote:
> Currently, large memory chunk allocation with tcg_malloc is broken. An
> attempt to allocate such a chunk when the pool_current field of TCGContext
> is not NULL will result in circular links in the list of memory pools:
>
>     p = new pool;
>     s->pool_current->next = p;
>     p->next = s->pool_current;
>     (in tcg_malloc_internal)
>
> Later p becomes the current pool, and the current pool becomes the next
> pool. The next tcg_malloc will switch the current pool to the next pool
> (the 'previous' current pool) and will start allocating memory from its
> beginning. But some memory at the beginning of this pool was already
> allocated and will be used twice for different arrays.
>
> At the end of this cover letter there is a patch that demonstrates the
> problem. It breaks the current trunk on the first translation block
> containing labels.
>
> Large memory pools cannot be reused by the memory allocator for big
> allocations, and an attempt to reuse them for small allocations may result
> in an unbounded increase of memory consumption at run time. Memory
> consumption would increase every time a new large chunk of memory is
> allocated. If code allocates such a chunk on every translation block (as
> the patch at the end of this letter does), then memory consumption would
> increase with every new block translated.
>
> My fix for the problems mentioned above is in the second e-mail. I moved
> large memory pools to a separate list and free them on pool_reset.

As I understand it, your approach removes the link back to the previously
allocated chunk to avoid handing out already allocated and used memory again.
You also added g_free() to tcg_pool_reset(). Wouldn't that slow down
emulation? Maybe the link to the previous chunk was made with the assumption
that big allocations will not happen twice in a row?
For example, while loading a kernel on realview and exynos, the code never
reaches this line in tcg_malloc_internal():

    p->next = s->pool_current;

If this assumption is correct, maybe it's better to just insert a check that
the allocated chunk has enough space to hold the requested block?

    + if (s->pool_cur - p->data + size > p->size) {
    +     goto new_pool;
    + }

      s->pool_current = p;
      s->pool_cur = p->data + size;
      s->pool_end = p->data + p->size;
      return p->data;
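The circular-link failure mode quoted above can be reproduced outside QEMU.
The sketch below is a minimal standalone illustration, not the real TCG code:
the struct and function names are invented for the example, but the two
assignments in link_large_chunk() are exactly the pair quoted from
tcg_malloc_internal(), and a standard tortoise-and-hare walk shows the
resulting list is cyclic.

```c
#include <stddef.h>

/* Illustrative stand-in for TCG's pool chunk; field names follow the
 * thread's description, not the real TCGContext. */
typedef struct Pool {
    struct Pool *next;
    size_t size;
} Pool;

/* The buggy linking from tcg_malloc_internal's large-chunk path. */
static void link_large_chunk(Pool *current, Pool *p)
{
    current->next = p;   /* s->pool_current->next = p; */
    p->next = current;   /* p->next = s->pool_current;  <- creates the cycle */
}

/* Floyd's tortoise-and-hare cycle detection over the chunk list. */
static int list_has_cycle(Pool *head)
{
    Pool *slow = head, *fast = head;
    while (fast && fast->next) {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast) {
            return 1;
        }
    }
    return 0;
}
```

Once the list is cyclic, any later traversal (such as switching to the "next"
pool on a subsequent tcg_malloc) revisits a chunk whose beginning is already
in use, which is the double-allocation the cover letter describes.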
On Mon, 5 Mar 2012, Evgeny Voevodin wrote:
> As I understand it, your approach removes the link back to the previously
> allocated chunk to avoid handing out already allocated and used memory
> again. You also added g_free() to tcg_pool_reset(). Wouldn't that slow
> down emulation?

No, it would not. I added g_free() for large memory chunks only. Emulation
runs that use only small allocations are unaffected. If an emulation run uses
a large memory allocation, well, it was broken before, so there is nothing to
compare the speed against in that case.

> Maybe the link to the previous chunk was made with the assumption that big
> allocations will not happen twice in a row? For example, while loading a
> kernel on realview and exynos, the code never reaches this line in
> tcg_malloc_internal():
>
>     p->next = s->pool_current;

The corresponding code has not changed since 2008 and probably never worked.
It would be hard to know now what the author wanted from this code back then.

> If this assumption is correct, maybe it's better to just insert a check
> that the allocated chunk has enough space to hold the requested block?
>
>     + if (s->pool_cur - p->data + size > p->size) {
>     +     goto new_pool;
>     + }
>
>       s->pool_current = p;
>       s->pool_cur = p->data + size;
>       s->pool_end = p->data + p->size;
>       return p->data;

First, this check does not solve the problem of circular links in the list.
Second, it does not solve the problem of unbounded growth of memory
consumption.

I should probably clarify that I found this issue while writing an unrelated
modification to the register allocator that required large memory allocations.
The current code in TCG probably never does such allocations and does not
trigger the memory corruption.
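The approach described in the thread -- keeping large chunks on their own
list and releasing them wholesale on pool reset -- can be sketched as
follows. This is not the actual patch; the type and function names are
invented for illustration, and plain malloc/free stands in for the
g_malloc/g_free that the real fix would use:

```c
#include <stddef.h>
#include <stdlib.h>

/* Invented names: a large chunk carries only a link header; the payload
 * follows it in the same allocation. */
typedef struct BigChunk {
    struct BigChunk *next;
} BigChunk;

typedef struct Ctx {
    BigChunk *big_list;  /* stand-in for the separate large-pool list */
    size_t big_count;
} Ctx;

/* Large allocations never touch the reusable small-pool chain; they are
 * simply prepended to their own list. */
static void *big_alloc(Ctx *s, size_t size)
{
    BigChunk *c = malloc(sizeof(BigChunk) + size);
    c->next = s->big_list;
    s->big_list = c;
    s->big_count++;
    return c + 1;  /* payload starts just past the header */
}

/* Mirrors adding g_free() to tcg_pool_reset(): large chunks are freed
 * rather than kept for reuse, so memory consumption stays bounded across
 * translation blocks. */
static void pool_reset(Ctx *s)
{
    BigChunk *c = s->big_list;
    while (c) {
        BigChunk *next = c->next;
        free(c);
        c = next;
    }
    s->big_list = NULL;
    s->big_count = 0;
}
```

Because the large chunks never enter the small-pool list, the back-link that
created the cycle is gone, and freeing them on reset addresses the unbounded
growth Kirill describes, at the cost of one free/alloc pair per large chunk
per translation block -- which only affects runs that make large allocations
at all.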
Ping? Somebody please review this patch...

Also, whom should I Cc in case of changes to tcg/? The MAINTAINERS file lists
only qemu-devel for this subsystem.

On Fri, 2 Mar 2012, Kirill Batuzov wrote:
> Currently, large memory chunk allocation with tcg_malloc is broken. An
> attempt to allocate such a chunk when the pool_current field of TCGContext
> is not NULL will result in circular links in the list of memory pools:
>
>     p = new pool;
>     s->pool_current->next = p;
>     p->next = s->pool_current;
>     (in tcg_malloc_internal)
>
> Later p becomes the current pool, and the current pool becomes the next
> pool. The next tcg_malloc will switch the current pool to the next pool
> (the 'previous' current pool) and will start allocating memory from its
> beginning. But some memory at the beginning of this pool was already
> allocated and will be used twice for different arrays.
>
> At the end of this cover letter there is a patch that demonstrates the
> problem. It breaks the current trunk on the first translation block
> containing labels.
>
> Large memory pools cannot be reused by the memory allocator for big
> allocations, and an attempt to reuse them for small allocations may result
> in an unbounded increase of memory consumption at run time. Memory
> consumption would increase every time a new large chunk of memory is
> allocated. If code allocates such a chunk on every translation block (as
> the patch at the end of this letter does), then memory consumption would
> increase with every new block translated.
>
> My fix for the problems mentioned above is in the second e-mail. I moved
> large memory pools to a separate list and free them on pool_reset.
>
> By the way: is there any particular reason for the labels array in
> TCGContext to be allocated dynamically? It has a constant size and is
> allocated unconditionally for each translation block.
>
> Kirill Batuzov (1):
>   Fix large memory chunks allocation with tcg_malloc.
>  tcg/tcg.c |   14 +++++++++-----
>  tcg/tcg.h |    2 +-
>  2 files changed, 10 insertions(+), 6 deletions(-)
>
> ---
> diff --git a/tcg/tcg.c b/tcg/tcg.c
> index 351a0a3..6dd54e6 100644
> --- a/tcg/tcg.c
> +++ b/tcg/tcg.c
> @@ -265,6 +265,8 @@ void tcg_set_frame(TCGContext *s, int reg,
>      s->frame_reg = reg;
>  }
>
> +uint8_t *p;
> +
>  void tcg_func_start(TCGContext *s)
>  {
>      int i;
> @@ -273,6 +275,7 @@ void tcg_func_start(TCGContext *s)
>      for(i = 0; i < (TCG_TYPE_COUNT * 2); i++)
>          s->first_free_temp[i] = -1;
>      s->labels = tcg_malloc(sizeof(TCGLabel) * TCG_MAX_LABELS);
> +    p = tcg_malloc(TCG_POOL_CHUNK_SIZE + 1);
>      s->nb_labels = 0;
>      s->current_frame_offset = s->frame_start;
diff --git a/tcg/tcg.c b/tcg/tcg.c
index 351a0a3..6dd54e6 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -265,6 +265,8 @@ void tcg_set_frame(TCGContext *s, int reg,
     s->frame_reg = reg;
 }

+uint8_t *p;
+
 void tcg_func_start(TCGContext *s)
 {
     int i;
@@ -273,6 +275,7 @@ void tcg_func_start(TCGContext *s)
     for(i = 0; i < (TCG_TYPE_COUNT * 2); i++)
         s->first_free_temp[i] = -1;
     s->labels = tcg_malloc(sizeof(TCGLabel) * TCG_MAX_LABELS);
+    p = tcg_malloc(TCG_POOL_CHUNK_SIZE + 1);
     s->nb_labels = 0;
     s->current_frame_offset = s->frame_start;
--
1.7.5.4